WMT2017 German-English machine translation challenge for news

Translate news articles from German into English. Submissions are evaluated with BLEU on the dev-0, dev-1 and test-A sets, as listed in the table below.

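The exact evaluation tooling used by the challenge is not stated on this page. As a minimal sketch, corpus BLEU for a submission could be reproduced offline with the sacrebleu package; the file names below (test-A/out.tsv follows the naming used in the submissions further down, test-A/expected.tsv is an assumed name for the reference file) are illustrative only.

```python
import sacrebleu  # pip install sacrebleu

# Assumed layout: one sentence per line, hypotheses aligned line-by-line with references.
with open("test-A/out.tsv", encoding="utf-8") as f:
    hypotheses = [line.rstrip("\n") for line in f]
with open("test-A/expected.tsv", encoding="utf-8") as f:
    references = [line.rstrip("\n") for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
# sacrebleu reports BLEU on a 0-100 scale; the leaderboard below uses 0-1.
print(f"test-A BLEU: {bleu.score / 100:.5f}")
```

The dev-0 and dev-1 columns would be computed the same way against their respective reference files.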
# submitter when ver. description dev-0 BLEU dev-1 BLEU test-A BLEU
119 MSz 2019-05-22 21:02 1.0.0 marian 100k tg freq 10000 0.06805 0.07651 0.06824
110 MSz 2019-05-22 18:44 1.0.0 marian 100k freq 10000 0.11676 0.13285 0.11359
9 MSz 2019-05-22 12:01 1.0.0 marian 1M neural-network marian 0.23935 0.27904 0.24561
50 MSz 2019-05-22 11:47 1.0.0 marian 1M tg neural-network marian 0.17381 0.20079 0.18072
12 MSz 2019-02-05 11:36 1.0.0 type=s2s, corpseLen=1M, valid-freq 10000, early-stopping 5, workspace 2500, postproc sed deescapeSpecialChars detruecase awk sed neural-network marian 0.23399 0.27282 0.23674
13 MSz 2019-01-22 17:17 1.0.0 type=amun, corpseLen=1M, valid-freq 10000, early-stopping 5, workspace 2500, postproc sed deescapeSpecialChars detruecase awk sed neural-network marian 0.23293 0.27193 0.23254
17 MSz 2019-01-12 12:57 1.0.0 corpseLen=590k, valid-freq 10000, early-stopping 5, workspace 3000, postproc sed deescapeSpecialChars detruecase awk sed 0.20598 0.24117 0.21002
18 MSz 2019-01-12 11:20 1.0.0 corpseLen=590k, valid-freq 10000, early-stopping 5, postproc sed deescapeSpecialChars detruecase awk 0.20547 0.24024 0.20857
19 MSz 2019-01-11 10:18 1.0.0 corpseLen=590k, valid-freq 10000, early-stopping 5, postproc sed deescapeSpecialChars detruecase 0.20547 0.24024 0.20857
25 MSz 2019-01-11 10:08 1.0.0 corpseLen=590k, valid-freq 10000, early-stopping 5, postproc sed deescapeSpecialChars 0.19604 0.23046 0.19946
32 MSz 2019-01-11 10:06 1.0.0 corpseLen=590k, valid-freq 10000, early-stopping 5, postproc sed 0.18594 0.22086 0.19142
94 MSz 2019-01-10 22:12 1.0.0 corpseLen=590k, valid-freq 10000, early-stopping 5, no postproc 0.16705 0.20042 0.17246
108 MSz 2019-01-03 22:12 1.0.0 with awk simple postproc on out 0.13057 0.15454 0.12454
116 MSz 2018-12-09 14:47 1.0.0 Second dell commit 0.09811 0.11533 0.09146
3 Durson 2018-02-15 00:38 1.0.0 Tensorflow 80k iterations ; beam 4 alpha 0.9 ready-made neural-network 0.25882 0.30561 0.26322
4 Durson 2018-02-15 00:05 1.0.0 Tensorflow 80k iterations ; beam 3 alpha 0.6 ready-made neural-network 0.25720 0.30334 0.26143
5 Durson 2018-02-14 23:18 1.0.0 Tensorflow 86k iterations ; beam 3 alpha 0.6 ready-made neural-network 0.25409 0.30126 0.25949
10 Durson 2018-02-14 11:47 1.0.0 Tensorflow 50k iterations ; beam 20 alpha 0.6 ready-made neural-network 0.23913 0.28295 0.24414
30 dk 2018-02-07 11:10 1.0.0 Add 5G data moses 0.19762 0.22481 0.19183
29 dk 2018-02-07 11:00 1.0.0 Add 5G data N/A 0.22481 0.19183
48 dk 2018-02-07 10:51 1.0.0 improve solution -stack 155 moses 0.18141 0.20750 0.18236
47 dk 2018-02-07 10:47 1.0.0 Improve solution -stack 155 moses 0.18141 N/A 0.18236
122 EmEm 2018-02-04 23:59 1.0.0 'baseline' moses 0.02757 0.02569 0.02823
26 deinonzch 2018-01-31 11:43 1.0.0 corpus=590616, NB_OF_EPOCHS=8, MAX_WORDS=46000 neural-network 0.18610 0.21637 0.19461
46 dk 2018-01-24 11:05 1.0.0 improve solution moses 0.18141 N/A 0.18236
103 p/tlen 2018-01-17 06:46 1.0.0 NMT with Marian, vocabulary=70K, epochs=7 0.14849 0.17603 0.15308
65 deinonzch 2018-01-16 18:36 1.0.0 --search-algorithm 1 -s 2000 --cube-pruning-pop-limit 2000 --cube-pruning-diversity 100-b 0.1 --minimum-bayes-risk moses 0.17767 0.20160 0.17646
100 p/tlen 2018-01-15 09:13 1.0.0 NMT trained with Marian on 10%, 5 epochs, 40K dictionary neural-network 0.15263 0.17750 0.15966
124 EmEm 2018-01-14 16:45 1.0.0 ibm self-made algo N/A N/A 0.02608
134 EmEm 2018-01-14 16:33 1.0.0 ibm1 N/A N/A 0.00762
66 MF 2018-01-13 21:39 1.0.0 Baseline 10%, stack 200 beam 0.1 moses 0.17469 0.19716 0.17625
95 MF 2018-01-13 21:22 1.0.0 0.17468 0.19761 0.17224
98 MF 2018-01-13 19:21 1.0.0 0.17009 0.19237 0.16779
121 MF 2018-01-13 19:12 1.0.0 0.06572 0.07063 0.06317
1 p/tlen 2018-01-09 18:10 1.0.0 WMT16 neural model (decoded with Amun) + de-escape apostrophes neural-network 0.27932 0.33703 0.28988
2 p/tlen 2018-01-08 21:13 1.0.0 neural model (decoded with Amun) neural-network 0.27358 0.33058 0.28454
67 MF 2018-01-08 18:13 1.0.0 0.17546 0.19893 0.17588
82 MF 2018-01-08 18:04 1.0.0 0.17546 0.19893 0.17369
39 [anonymised] 2018-01-08 17:57 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 100 0.18151 0.20738 0.18358
35 [anonymised] 2018-01-08 16:35 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 150 moses 0.18138 0.20709 0.18379
37 [anonymised] 2018-01-08 15:45 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 200 0.18117 0.20689 0.18378
41 [anonymised] 2018-01-08 15:02 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 2000 0.18119 0.20659 0.18328
44 [anonymised] 2018-01-08 14:58 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -search-algorithm 1 -stack 2000 -cube-pruning-pop-limit 2000 -cube-pruning-diversity 500 0.18131 0.20651 0.18324
40 [anonymised] 2018-01-07 21:47 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -search-algorithm 1 -stack 2000 0.18135 0.20681 0.18347
43 [anonymised] 2018-01-07 16:29 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -beam-threshold 0.01 0.18129 0.20673 0.18326
42 [anonymised] 2018-01-07 13:30 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 1000 0.18122 0.20661 0.18327
38 [anonymised] 2018-01-07 13:25 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -search-algorithm 1 0.18098 0.20696 0.18370
52 MSz 2018-01-03 14:57 1.0.0 MERTed && --beam-thresholds moses 0.17955 0.20203 0.17971
150 [anonymised] 2018-01-03 14:30 1.0.0 TAU-2017-20 - check 6 values for maximum stack size; plot graphs for BLEU and decoding time moses 0.17544 0.19840 N/A
91 deinonzch 2018-01-03 13:30 1.0.0 improve solution search-algorithm 1 -s 0 --cube-pruning-pop-limit 5000 --cube-pruning-diversity 100 0.17607 0.19839 0.17336
81 deinonzch 2018-01-03 09:39 1.0.0 improve solution search-algorithm 1 -s 0 0.17598 0.19873 0.17377
21 Weronika 2018-01-03 00:06 1.0.0 dev-0 dev-1 test-A -stack 1500 moses 0.21264 0.23117 0.20230
20 Weronika 2018-01-02 22:05 1.0.0 test-A/out.tsv -stack 1500 0.21264 0.23071 0.20230
24 Weronika 2018-01-02 20:30 1.0.0 dev-0/out.tsv 0.2126 -stack 1500 0.21264 0.23071 0.20213
84 deinonzch 2018-01-02 18:18 1.0.0 improve solution search-algorithm 1 -cube-pruning-pop-limit 2000 -s 2000 0.17599 0.19856 0.17358
104 deinonzch 2018-01-02 17:36 1.0.0 improve solution search-algorithm 1 and beam-threshold 10 0.13985 0.15876 0.13409
106 deinonzch 2018-01-02 17:24 1.0.0 improve solution search-algorithm 1 and beam-threshold 100 0.13829 0.15653 0.13114
105 deinonzch 2018-01-02 17:05 1.0.0 improve solution search-algorithm 1 and beam-threshold 100 0.13839 0.15655 0.13129
77 deinonzch 2018-01-02 16:25 1.0.0 --search-algorithm 1=cube pruning --stack 100 moses 0.17572 0.19862 0.17403
88 deinonzch 2018-01-02 16:15 1.0.0 --search-algorithm 0=normal stack -stack 100 moses 0.17569 0.19868 0.17357
70 deinonzch 2018-01-02 16:00 1.0.0 --search-algorithm 1=cube pruning 0.17596 0.19845 0.17423
85 deinonzch 2018-01-02 15:28 1.0.0 default 0.17561 0.19857 0.17358
23 Weronika 2018-01-02 11:29 1.0.0 dev-0/out.tsv 0.1982 -stack 1500 0.19816 0.23071 0.20213
51 MSz 2018-01-01 16:01 1.0.0 slightly improved -beam-threshold 0.25 -stack 152 moses 0.17955 0.20203 0.17971
54 MSz 2018-01-01 14:12 1.0.0 slightly improved -stack 154 0.17951 0.20215 0.17957
55 MSz 2018-01-01 12:22 1.0.0 slightly improved -stack 152 0.17951 0.20216 0.17957
60 MSz 2017-12-31 10:12 1.0.0 MERTed && --beam-threshold 0.0625 0.17950 0.20199 0.17956
59 MSz 2017-12-31 09:11 1.0.0 MERTed && --beam-threshold 0.125 0.17950 0.20189 0.17956
53 MSz 2017-12-31 00:49 1.0.0 MERTed && --beam-threshold 0.25 0.17948 0.20195 0.17965
61 MSz 2017-12-31 00:05 1.0.0 MERTed && --beam-threshold 0.5 0.17871 0.20147 0.17916
101 MSz 2017-12-30 20:25 1.0.0 MERTed && --beam-threshold 1 0.15859 0.18110 0.15936
107 MSz 2017-12-30 20:00 1.0.0 MERTed && --beam-threshold 2 0.12788 0.15173 0.12880
111 MSz 2017-12-30 19:29 1.0.0 MERTed && --beam-threshold 4 0.11115 0.12879 0.11071
112 MSz 2017-12-30 18:59 1.0.0 MERTed && --beam-threshold 8 0.10379 0.11938 0.10410
113 MSz 2017-12-30 18:33 1.0.0 MERTed && --beam-threshold 16 0.10225 0.11686 0.10241
114 MSz 2017-12-30 18:08 1.0.0 MERTed && --beam-threshold 32 0.10135 0.11540 0.10147
115 MSz 2017-12-30 17:17 1.0.0 MERTed && --beam-threshold 64 moses 0.10064 0.11469 0.10065
74 MF 2017-12-20 15:48 1.0.0 0.17542 0.19777 0.17413
75 MF 2017-12-19 21:42 1.0.0 N/A 0.19777 0.17413
89 kaczla 2017-12-19 19:23 1.0.0 Moses baseline on 10% utterances moses 0.17528 0.19827 0.17356
80 kaczla 2017-12-19 19:07 1.0.0 Moses baseline on 10% utterances (stack 400, search-algorithm 1 = cube pruning) moses 0.17560 0.19849 0.17378
76 kaczla 2017-12-19 18:57 1.0.0 Moses baseline on 10% utterances (search-algorithm 1 = cube pruning) moses 0.17550 0.19842 0.17413
93 kaczla 2017-12-19 18:11 1.0.0 Moses baseline on 10% utterances (stack 1000) moses 0.17540 0.19821 0.17328
92 kaczla 2017-12-19 16:17 1.0.0 Moses baseline on 10% utterances (stack 400) moses 0.17563 0.19837 0.17333
83 [anonymised] 2017-12-06 12:05 1.0.0 TAU-2017-16 - baseline with the probing multiplier parameter p in build_binary program changed moses 0.17543 0.19844 0.17366
22 Weronika 2017-12-05 20:10 1.0.0 Add 5G monolingual data + MERT v2 mert moses 0.21253 0.23071 0.20213
31 Weronika 2017-12-04 13:57 1.0.0 Add 5G monolingual data + MERT v1 mert moses 0.19765 0.22497 0.19174
36 [anonymised] 2017-11-30 18:42 1.0.0 TAU-2017-06 - add dictionary extracted from dict.cc to corpus only; clean corpus moses 0.18117 0.20689 0.18378
49 MF 2017-11-29 14:02 1.0.0 Mert - part of training set mert moses 0.18692 0.21585 0.18183
45 [anonymised] 2017-11-29 12:57 1.0.0 TAU-2017-06 - add dictionary extracted from dict.cc to corpus and language model moses 0.18173 0.20540 0.18324
78 kaczla 2017-11-29 11:57 1.0.0 Moses baseline on 10% utterances (6-gram model) 0.17542 0.19881 0.17380
79 kaczla 2017-11-29 11:54 1.0.0 Moses baseline on 10% utterances (6-gram model) moses 0.17542 0.19881 0.17380
86 kaczla 2017-11-29 11:50 1.0.0 Moses baseline on 10% utterances (5-gram model - trie data structure) moses 0.17536 0.19835 0.17358
87 kaczla 2017-11-29 11:48 1.0.0 Moses baseline on 10% utterances moses 0.17536 0.19835 0.17358
11 kaczla 2017-11-29 11:40 1.0.0 baseline Moses on 100% utterances + 40GB english monolingual data (4-gram + pruning) - without mert 0.24350 0.28709 0.24384
8 kaczla 2017-11-29 11:37 1.0.0 baseline Moses on 100% utterances + 40GB english monolingual data (4-gram + pruning + 2 iteration MERT) 0.23935 0.28978 0.25245
6 kaczla 2017-11-29 11:33 1.0.0 baseline Moses on 100% utterances + 40GB english monolingual data (4-gram + pruning + 9 iteration MERT - weights no change) 0.25819 0.29095 0.25513
34 MF 2017-11-26 09:13 1.0.0 5G monolingual data moses 0.19149 0.21697 0.18459
57 MSz 2017-11-24 15:45 1.0.0 used MERT (tuned and tested on dev-0) mert moses 0.17950 0.20199 0.17956
56 MSz 2017-11-24 15:39 1.0.0 used MERT (tuned and tested on dev-0) mert moses 0.17950 0.20199 0.17956
58 MSz 2017-11-24 15:30 1.0.0 used MERT (tuned and tested on dev-0) mert moses 0.17950 0.20199 0.17956
149 patrycja 2017-11-22 15:41 1.0.0 Portuguese-english translation (+ dictionary improvement) moses N/A N/A N/A
15 siulkilulki 2017-11-22 15:05 1.0.0 40GB language model ready-made moses 0.23216 0.26505 0.23152
7 kaczla 2017-11-22 12:19 1.0.0 baseline Moses on 100% utterances + 40GB english monolingual data (4-gram + pruning + 2 iteration MERT) - without dev-1 0.23935 N/A 0.25245
14 siulkilulki 2017-11-22 09:06 1.0.0 Added 40GB corpora ready-made moses 0.23216 0.26505 0.23152
99 MF 2017-11-21 16:36 1.0.0 5G LM moses 0.17201 0.19406 0.16658
28 Weronika 2017-11-21 13:39 1.0.0 Add 5G monolingual data moses 0.19762 0.22481 0.19183
102 MF 2017-11-20 19:31 1.0.0 5G LM data 0.16343 0.18296 0.15704
96 MF 2017-11-19 19:27 1.0.0 test moses 0.17422 0.19771 0.17118
128 MSz 2017-11-16 14:35 1.0.0 used MERT (tuned and tested on dev-0) mert moses 0.10792 N/A 0.00768
33 kaczla 2017-11-15 11:27 1.0.0 baseline Moses on 10% utterances + 40GB english monolingual data moses 0.18708 0.22068 0.18618
97 Weronika 2017-11-14 23:50 1.0.0 Use 10%, split compounds moses 0.17557 0.19846 0.17017
90 Weronika 2017-11-14 20:57 1.0.0 Moses baseline on 10% utterances v2 moses 0.17557 0.19846 0.17351
73 Weronika 2017-11-14 20:02 1.0.0 Moses baseline on 10% utterances 0.17568 0.19777 0.17413
117 MF 2017-11-13 19:22 1.0.0 MERT tune on a part of training set attempt 2 mert moses 0.09175 0.10364 0.09110
109 Weronika 2017-11-08 12:52 1.0.0 Split compound nouns moses 0.08586 0.09299 0.12383
118 Weronika 2017-11-08 01:24 1.0.0 Moses 0.08586 0.09299 0.08659
16 siulkilulki 2017-11-07 22:37 1.0.0 Moses 100% utterances compact phrase and lexical-tables ready-made moses 0.21408 0.24051 0.21523
148 siulkilulki 2017-11-07 21:26 1.0.0 Merge branch 'master' of ssh://gonito.net/siulkilulki/wmt-2017 ready-made moses 0.12443 N/A N/A
120 MF 2017-11-07 18:36 1.0.0 MERT tune on a part of training set mert moses N/A N/A 0.06720
62 kaczla 2017-11-07 17:40 1.0.0 baseline Moses on 10% utterances + Wikipedia title (with identical titles) and Wiktionary (all translation) moses 0.17806 0.19990 0.17846
63 kaczla 2017-11-06 21:05 1.0.0 baseline Moses on 10% utterances + Wiktionary (all translation) moses 0.17677 0.19965 0.17712
64 kaczla 2017-11-06 18:17 1.0.0 baseline Moses on 10% utterances + Wiktionary (only first translation) moses 0.17552 0.19828 0.17673
68 kaczla 2017-11-02 12:49 1.0.0 baseline Moses on 10% utterances + Wikipedia title (with identical titles) moses 0.17726 0.20027 0.17562
69 kaczla 2017-11-02 07:28 1.0.0 baseline Moses on 10% utterances + Wikipedia title (ignore identical titles) moses 0.17669 0.19682 0.17528
27 kaczla 2017-10-22 20:13 1.0.0 Moses baseline on 50% utterances ready-made moses 0.19606 0.22087 0.19433
126 kaczla 2017-10-21 07:58 1.0.0 Add script for counting words 0.01390 0.01587 0.01474
127 MSz 2017-10-11 17:56 1.0.0 Hope better solution stupid N/A N/A 0.00768
132 MSz 2017-10-11 16:19 1.0.0 my copy-solution stupid N/A N/A 0.00762
147 MSz 2017-10-11 13:15 1.0.0 my brilliant solution2 stupid N/A N/A N/A
146 MSz 2017-10-11 13:05 1.0.0 my brilliant solution2 N/A N/A N/A
135 [anonymised] 2017-10-11 12:49 1.0.0 TAU-2017-01 solution 01 stupid N/A N/A 0.00049
125 kaczla 2017-10-11 11:12 1.0.0 Popular german words stupid 0.01390 0.01587 0.01474
145 MSz 2017-10-11 07:48 1.0.0 my brilliant solution N/A N/A N/A
142 kaczla 2017-10-11 05:40 1.0.0 Popular english words stupid 0.00000 0.00218 0.00000
141 kaczla 2017-10-11 05:36 1.0.0 Popular english words stupid 0.00000 0.00001 0.00000
72 p/tlen 2017-10-11 05:23 1.0.0 Moses baseline on 10% utterances ready-made 0.17568 0.19777 0.17413
71 p/tlen 2017-10-11 05:13 1.0.0 baseline Moses on 10% utterances 0.17568 N/A 0.17413
140 kaczla 2017-10-11 05:11 1.0.0 Popular english words stupid 0.00000 0.00000 0.00000
139 kaczla 2017-10-11 05:05 1.0.0 Popular german words stupid 0.00000 0.00000 0.00000
129 MF 2017-10-10 19:39 1.0.0 translated days stupid 0.00722 N/A 0.00766
131 Weronika 2017-10-09 20:42 1.0.0 test stupid N/A N/A 0.00762
144 Weronika 2017-10-09 20:40 1.0.0 empty output stupid N/A N/A N/A
123 deinonzch 2017-10-09 09:43 1.0.0 my stupid solution stupid 0.03144 N/A 0.02706
143 deinonzch 2017-10-08 17:10 1.0.0 [ ] 0.00000 N/A N/A
136 p/tlen 2017-10-05 06:04 1.0.0 empty output 0.00000 0.00000 0.00000
138 siulkilulki 2017-10-04 15:04 1.0.0 stupid stupid N/A 0.00000 0.00000
133 EmEm 2017-10-04 14:46 1.0.0 stupid solution 2 stupid N/A N/A 0.00762
137 EmEm 2017-10-04 14:42 1.0.0 stupid solution N/A N/A 0.00000
130 [anonymised] 2017-10-03 13:23 1.0.0 just checkin' stupid N/A N/A 0.00762
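Many of the Moses entries above differ only in decoder settings such as -stack, -beam-threshold and -search-algorithm 1 (cube pruning). As a rough, hypothetical illustration of how such a decoding run could be driven (the moses binary path, moses.ini location and input file name are assumptions, not taken from any submission):

```python
import subprocess

# Placeholder paths for illustration only; not part of any submission above.
MOSES_BIN = "/opt/moses/bin/moses"
MOSES_INI = "model/moses.ini"

def decode(src_path: str, out_path: str, stack: int = 100, beam_threshold: float = 0.25) -> None:
    """Translate src_path with the Moses decoder, varying the options tuned in the table above."""
    with open(src_path, encoding="utf-8") as src, open(out_path, "w", encoding="utf-8") as out:
        subprocess.run(
            [
                MOSES_BIN,
                "-f", MOSES_INI,
                "-stack", str(stack),
                "-beam-threshold", str(beam_threshold),
                # "-search-algorithm", "1",  # cube pruning, as used in several entries
            ],
            stdin=src,
            stdout=out,
            check=True,
        )

# Example: the -stack 150 / -beam-threshold 0.25 setting explored by several submitters.
decode("dev-0/in.tsv", "dev-0/out.tsv", stack=150, beam_threshold=0.25)
```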