WMT2017 German-English machine translation challenge for news
Translate news articles from German into English. [ver. 1.0.0]
This is the full list of all submissions; to see only each submitter's best result, see the leaderboard.
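
Scores below are BLEU on the dev-0, dev-1 and test-A splits, reported as fractions rather than the usual 0-100 scale. As a minimal sketch of how such a score could be reproduced locally with sacrebleu (the file names out.txt and expected.txt are hypothetical placeholders for one-sentence-per-line system output and reference; this is not the challenge's own scoring tool):

```python
# Minimal sketch: score a system output against a reference with sacrebleu.
# out.txt / expected.txt are hypothetical placeholder file names.
import sacrebleu

with open("out.txt", encoding="utf-8") as f:
    hypotheses = [line.rstrip("\n") for line in f]
with open("expected.txt", encoding="utf-8") as f:
    references = [line.rstrip("\n") for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
# sacrebleu reports BLEU on a 0-100 scale; divide by 100 to match the table.
print(f"BLEU: {bleu.score / 100:.5f}")
```
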
# | submitter | when | ver. | description | dev-0 BLEU | dev-1 BLEU | test-A BLEU | |
---|---|---|---|---|---|---|---|---|
6 | [anonymized] | 2021-02-17 19:16 | 1.0.0 | result fairseq m2m-100 just-inference | 0.33857 | 0.40306 | 0.35347 | |
5 | [anonymized] | 2021-02-12 19:54 | 1.0.0 | notebook fairseq m2m-100 just-inference | 0.33857 | 0.40306 | 0.35347 | |
4 | [anonymized] | 2021-02-12 19:53 | 1.0.0 | m2m-100 | 0.33857 | 0.40306 | 0.35347 | |
21 | [anonymized] | 2020-01-28 17:56 | 1.0.0 | translation with ready-made fairseq transformer.wmt19.de-en v2 fairseq ready-made-model | 0.23399 | 0.27485 | 0.23698 | |
20 | [anonymized] | 2020-01-27 20:40 | 1.0.0 | translation with ready-made fairseq transformer.wmt19.de-en fairseq ready-made-model | N/A | 0.27485 | 0.23698 | |
129 | [anonymized] | 2020-01-27 12:19 | 1.0.0 | CNN, sample_size = 5M, epochs = 5 fairseq train | 0.07706 | 0.09276 | 0.07834 | |
1 | [anonymized] | 2020-01-15 09:58 | 1.0.0 | Fix tokenization of the existing solution v3 ready-made fairseq | 0.39610 | 0.47024 | 0.41504 | |
2 | [anonymized] | 2020-01-15 09:29 | 1.0.0 | Fix tokenization of the existing solution v2 ready-made fairseq | 0.39264 | 0.46386 | 0.40909 | |
162 | [anonymized] | 2020-01-15 09:15 | 1.0.0 | Fix tokenization of the existing solution | N/A | N/A | N/A | |
3 | [anonymized] | 2020-01-14 20:53 | 1.0.0 | fix tokenization of output ready-made fairseq | 0.38549 | 0.45189 | 0.39879 | |
10 | [anonymized] | 2020-01-07 11:35 | 1.0.0 | ready-made Fairseq model fairseq ready-made-model | 0.24760 | 0.31147 | 0.26579 | |
9 | [anonymized] | 2019-12-30 09:59 | 1.0.0 | Ran a ready-made Fairseq model fairseq ready-made-model | 0.24760 | 0.31147 | 0.26579 | |
130 | [anonymized] | 2019-05-22 21:02 | 1.0.0 | marian 100k tg freq 10000 neural-network marian | 0.06805 | 0.07651 | 0.06824 | |
120 | [anonymized] | 2019-05-22 18:44 | 1.0.0 | marian 100k freq 10000 neural-network marian | 0.11676 | 0.13285 | 0.11359 | |
17 | [anonymized] | 2019-05-22 12:01 | 1.0.0 | marian 1M neural-network marian | 0.23935 | 0.27904 | 0.24561 | |
60 | [anonymized] | 2019-05-22 11:47 | 1.0.0 | marian 1M tg neural-network marian | 0.17381 | 0.20079 | 0.18072 | |
22 | [anonymized] | 2019-02-05 11:36 | 1.0.0 | type=s2s, corpusLen=1M, valid-freq 10000, early-stopping 5, workspace 2500, postproc sed deescapeSpecialChars detruecase awk sed neural-network | 0.23399 | 0.27282 | 0.23674 | |
23 | [anonymized] | 2019-01-22 17:17 | 1.0.0 | type=amun, corpusLen=1M, valid-freq 10000, early-stopping 5, workspace 2500, postproc sed deescapeSpecialChars detruecase awk sed neural-network | 0.23293 | 0.27193 | 0.23254 | |
27 | [anonymized] | 2019-01-12 12:57 | 1.0.0 | corpusLen=590k, valid-freq 10000, early-stopping 5, workspace 3000, postproc sed deescapeSpecialChars detruecase awk sed | 0.20598 | 0.24117 | 0.21002 | |
28 | [anonymized] | 2019-01-12 11:20 | 1.0.0 | corpusLen=590k, valid-freq 10000, early-stopping 5, postproc sed deescapeSpecialChars detruecase awk | 0.20547 | 0.24024 | 0.20857 | |
29 | [anonymized] | 2019-01-11 10:18 | 1.0.0 | corpusLen=590k, valid-freq 10000, early-stopping 5, postproc sed deescapeSpecialChars detruecase | 0.20547 | 0.24024 | 0.20857 | |
35 | [anonymized] | 2019-01-11 10:08 | 1.0.0 | corpusLen=590k, valid-freq 10000, early-stopping 5, postproc sed deescapeSpecialChars | 0.19604 | 0.23046 | 0.19946 | |
42 | [anonymized] | 2019-01-11 10:06 | 1.0.0 | corpusLen=590k, valid-freq 10000, early-stopping 5, postproc sed | 0.18594 | 0.22086 | 0.19142 | |
104 | [anonymized] | 2019-01-10 22:12 | 1.0.0 | corpusLen=590k, valid-freq 10000, early-stopping 5, no postproc | 0.16705 | 0.20042 | 0.17246 | |
118 | [anonymized] | 2019-01-03 22:12 | 1.0.0 | simple awk postprocessing on output | 0.13057 | 0.15454 | 0.12454 | |
126 | [anonymized] | 2018-12-09 14:47 | 1.0.0 | Second dell commit | 0.09811 | 0.11533 | 0.09146 | |
11 | [anonymized] | 2018-02-15 00:38 | 1.0.0 | Tensorflow 80k iterations ; beam 4 alpha 0.9 ready-made neural-network | 0.25882 | 0.30561 | 0.26322 | |
12 | [anonymized] | 2018-02-15 00:05 | 1.0.0 | Tensorflow 80k iterations ; beam 3 alpha 0.6 ready-made neural-network | 0.25720 | 0.30334 | 0.26143 | |
13 | [anonymized] | 2018-02-14 23:18 | 1.0.0 | Tensorflow 86k iterations ; beam 3 alpha 0.6 ready-made neural-network | 0.25409 | 0.30126 | 0.25949 | |
18 | [anonymized] | 2018-02-14 11:47 | 1.0.0 | Tensorflow 50k iterations ; beam 20 alpha 0.6 ready-made neural-network | 0.23913 | 0.28295 | 0.24414 | |
40 | [anonymized] | 2018-02-07 11:10 | 1.0.0 | Add 5G data moses | 0.19762 | 0.22481 | 0.19183 | |
39 | [anonymized] | 2018-02-07 11:00 | 1.0.0 | Add 5G data | N/A | 0.22481 | 0.19183 | |
58 | [anonymized] | 2018-02-07 10:51 | 1.0.0 | improve solution -stack 155 moses | 0.18141 | 0.20750 | 0.18236 | |
57 | [anonymized] | 2018-02-07 10:47 | 1.0.0 | Improve solution -stack 155 moses | 0.18141 | N/A | 0.18236 | |
133 | [anonymized] | 2018-02-04 23:59 | 1.0.0 | 'baseline' moses | 0.02757 | 0.02569 | 0.02823 | |
36 | [anonymized] | 2018-01-31 11:43 | 1.0.0 | corpus=590616, NB_OF_EPOCHS=8, MAX_WORDS=46000 neural-network | 0.18610 | 0.21637 | 0.19461 | |
56 | [anonymized] | 2018-01-24 11:05 | 1.0.0 | improve solution moses | 0.18141 | N/A | 0.18236 | |
113 | p/tlen | 2018-01-17 06:46 | 1.0.0 | NMT with Marian, vocabulary=70K, epochs=7 | 0.14849 | 0.17603 | 0.15308 | |
75 | [anonymized] | 2018-01-16 18:36 | 1.0.0 | --search-algorithm 1 -s 2000 --cube-pruning-pop-limit 2000 --cube-pruning-diversity 100 -b 0.1 --minimum-bayes-risk moses | 0.17767 | 0.20160 | 0.17646 | |
110 | p/tlen | 2018-01-15 09:13 | 1.0.0 | NMT trained with Marian on 10%, 5 epochs, 40K dictionary neural-network | 0.15263 | 0.17750 | 0.15966 | |
135 | [anonymized] | 2018-01-14 16:45 | 1.0.0 | ibm self-made algo | N/A | N/A | 0.02608 | |
145 | [anonymized] | 2018-01-14 16:33 | 1.0.0 | ibm1 | N/A | N/A | 0.00762 | |
76 | [anonymized] | 2018-01-13 21:39 | 1.0.0 | Baseline 10%, stack 200 beam 0.1 moses | 0.17469 | 0.19716 | 0.17625 | |
105 | [anonymized] | 2018-01-13 21:22 | 1.0.0 | | 0.17468 | 0.19761 | 0.17224 | |
108 | [anonymized] | 2018-01-13 19:21 | 1.0.0 | | 0.17009 | 0.19237 | 0.16779 | |
132 | [anonymized] | 2018-01-13 19:12 | 1.0.0 | | 0.06572 | 0.07063 | 0.06317 | |
7 | p/tlen | 2018-01-09 18:10 | 1.0.0 | WMT16 neural model (decoded with Amun) + de-escape apostrophes neural-network | 0.27932 | 0.33703 | 0.28988 | |
8 | p/tlen | 2018-01-08 21:13 | 1.0.0 | neural model (decoded with Amun) neural-network | 0.27358 | 0.33058 | 0.28454 | |
77 | [anonymized] | 2018-01-08 18:13 | 1.0.0 | | 0.17546 | 0.19893 | 0.17588 | |
92 | [anonymized] | 2018-01-08 18:04 | 1.0.0 | | 0.17546 | 0.19893 | 0.17369 | |
49 | [anonymized] | 2018-01-08 17:57 | 1.0.0 | TAU-2017-21 - improve solution by changing some decoding options: -stack 100 | 0.18151 | 0.20738 | 0.18358 | |
45 | [anonymized] | 2018-01-08 16:35 | 1.0.0 | TAU-2017-21 - improve solution by changing some decoding options: -stack 150 moses | 0.18138 | 0.20709 | 0.18379 | |
47 | [anonymized] | 2018-01-08 15:45 | 1.0.0 | TAU-2017-21 - improve solution by changing some decoding options: -stack 200 | 0.18117 | 0.20689 | 0.18378 | |
51 | [anonymized] | 2018-01-08 15:02 | 1.0.0 | TAU-2017-21 - improve solution by changing some decoding options: -stack 2000 | 0.18119 | 0.20659 | 0.18328 | |
54 | [anonymized] | 2018-01-08 14:58 | 1.0.0 | TAU-2017-21 - improve solution by changing some decoding options: -search-algorithm 1 -stack 2000 -cube-pruning-pop-limit 2000 -cube-pruning-diversity 500 | 0.18131 | 0.20651 | 0.18324 | |
50 | [anonymized] | 2018-01-07 21:47 | 1.0.0 | TAU-2017-21 - improve solution by changing some decoding options: -search-algorithm 1 -stack 2000 | 0.18135 | 0.20681 | 0.18347 | |
53 | [anonymized] | 2018-01-07 16:29 | 1.0.0 | TAU-2017-21 - improve solution by changing some decoding options: -beam-threshold 0.01 | 0.18129 | 0.20673 | 0.18326 | |
52 | [anonymized] | 2018-01-07 13:30 | 1.0.0 | TAU-2017-21 - improve solution by changing some decoding options: -stack 1000 | 0.18122 | 0.20661 | 0.18327 | |
48 | [anonymized] | 2018-01-07 13:25 | 1.0.0 | TAU-2017-21 - improve solution by changing some decoding options: -search-algorithm 1 | 0.18098 | 0.20696 | 0.18370 | |
62 | [anonymized] | 2018-01-03 14:57 | 1.0.0 | MERTed && --beam-thresholds moses | 0.17955 | 0.20203 | 0.17971 | |
161 | [anonymized] | 2018-01-03 14:30 | 1.0.0 | TAU-2017-20 - check 6 values for maximum stack size; plot graphs for BLEU and decoding time moses | 0.17544 | 0.19840 | N/A | |
101 | [anonymized] | 2018-01-03 13:30 | 1.0.0 | improve solution search-algorithm 1 -s 0 --cube-pruning-pop-limit 5000 --cube-pruning-diversity 100 | 0.17607 | 0.19839 | 0.17336 | |
91 | [anonymized] | 2018-01-03 09:39 | 1.0.0 | improve solution search-algorithm 1 -s 0 | 0.17598 | 0.19873 | 0.17377 | |
31 | [anonymized] | 2018-01-03 00:06 | 1.0.0 | dev-0 dev-1 test-A -stack 1500 moses | 0.21264 | 0.23117 | 0.20230 | |
30 | [anonymized] | 2018-01-02 22:05 | 1.0.0 | test-A/out.tsv -stack 1500 | 0.21264 | 0.23071 | 0.20230 | |
34 | [anonymized] | 2018-01-02 20:30 | 1.0.0 | dev-0/out.tsv 0.2126 -stack 1500 | 0.21264 | 0.23071 | 0.20213 | |
94 | [anonymized] | 2018-01-02 18:18 | 1.0.0 | improve solution search-algorithm 1 -cube-pruning-pop-limit 2000 -s 2000 | 0.17599 | 0.19856 | 0.17358 | |
114 | [anonymized] | 2018-01-02 17:36 | 1.0.0 | improve solution search-algorithm 1 and beam-threshold 10 | 0.13985 | 0.15876 | 0.13409 | |
116 | [anonymized] | 2018-01-02 17:24 | 1.0.0 | improve solution search-algorithm 1 and beam-threshold 100 | 0.13829 | 0.15653 | 0.13114 | |
115 | [anonymized] | 2018-01-02 17:05 | 1.0.0 | improve solution search-algorithm 1 and beam-threshold 100 | 0.13839 | 0.15655 | 0.13129 | |
87 | [anonymized] | 2018-01-02 16:25 | 1.0.0 | --search-algorithm 1=cube pruning --stack 100 moses | 0.17572 | 0.19862 | 0.17403 | |
98 | [anonymized] | 2018-01-02 16:15 | 1.0.0 | --search-algorithm 0=normal stack -stack 100 moses | 0.17569 | 0.19868 | 0.17357 | |
80 | [anonymized] | 2018-01-02 16:00 | 1.0.0 | --search-algorithm 1=cube pruning | 0.17596 | 0.19845 | 0.17423 | |
95 | [anonymized] | 2018-01-02 15:28 | 1.0.0 | default | 0.17561 | 0.19857 | 0.17358 | |
33 | [anonymized] | 2018-01-02 11:29 | 1.0.0 | dev-0/out.tsv 0.1982 -stack 1500 | 0.19816 | 0.23071 | 0.20213 | |
61 | [anonymized] | 2018-01-01 16:01 | 1.0.0 | slightly improved -beam-threshold 0.25 -stack 152 moses | 0.17955 | 0.20203 | 0.17971 | |
64 | [anonymized] | 2018-01-01 14:12 | 1.0.0 | slightly improved -stack 154 | 0.17951 | 0.20215 | 0.17957 | |
65 | [anonymized] | 2018-01-01 12:22 | 1.0.0 | slightly improved -stack 152 | 0.17951 | 0.20216 | 0.17957 | |
70 | [anonymized] | 2017-12-31 10:12 | 1.0.0 | MERTed && --beam-threshold 0.0625 | 0.17950 | 0.20199 | 0.17956 | |
69 | [anonymized] | 2017-12-31 09:11 | 1.0.0 | MERTed && --beam-threshold 0.125 | 0.17950 | 0.20189 | 0.17956 | |
63 | [anonymized] | 2017-12-31 00:49 | 1.0.0 | MERTed && --beam-threshold 0.25 | 0.17948 | 0.20195 | 0.17965 | |
71 | [anonymized] | 2017-12-31 00:05 | 1.0.0 | MERTed && --beam-threshold 0.5 | 0.17871 | 0.20147 | 0.17916 | |
111 | [anonymized] | 2017-12-30 20:25 | 1.0.0 | MERTed && --beam-threshold 1 | 0.15859 | 0.18110 | 0.15936 | |
117 | [anonymized] | 2017-12-30 20:00 | 1.0.0 | MERTed && --beam-threshold 2 | 0.12788 | 0.15173 | 0.12880 | |
121 | [anonymized] | 2017-12-30 19:29 | 1.0.0 | MERTed && --beam-threshold 4 | 0.11115 | 0.12879 | 0.11071 | |
122 | [anonymized] | 2017-12-30 18:59 | 1.0.0 | MERTed && --beam-threshold 8 | 0.10379 | 0.11938 | 0.10410 | |
123 | [anonymized] | 2017-12-30 18:33 | 1.0.0 | MERTed && --beam-threshold 16 | 0.10225 | 0.11686 | 0.10241 | |
124 | [anonymized] | 2017-12-30 18:08 | 1.0.0 | MERTed && --beam-threshold 32 | 0.10135 | 0.11540 | 0.10147 | |
125 | [anonymized] | 2017-12-30 17:17 | 1.0.0 | MERTed && --beam-threshold 64 moses | 0.10064 | 0.11469 | 0.10065 | |
84 | [anonymized] | 2017-12-20 15:48 | 1.0.0 | | 0.17542 | 0.19777 | 0.17413 | |
85 | [anonymized] | 2017-12-19 21:42 | 1.0.0 | | N/A | 0.19777 | 0.17413 | |
99 | kaczla | 2017-12-19 19:23 | 1.0.0 | Moses baseline on 10% utterances moses | 0.17528 | 0.19827 | 0.17356 | |
90 | kaczla | 2017-12-19 19:07 | 1.0.0 | Moses baseline on 10% utterances (stack 400, search-algorithm 1 = cube pruning) moses | 0.17560 | 0.19849 | 0.17378 | |
86 | kaczla | 2017-12-19 18:57 | 1.0.0 | Moses baseline on 10% utterances (search-algorithm 1 = cube pruning) moses | 0.17550 | 0.19842 | 0.17413 | |
103 | kaczla | 2017-12-19 18:11 | 1.0.0 | Moses baseline on 10% utterances (stack 1000) moses | 0.17540 | 0.19821 | 0.17328 | |
102 | kaczla | 2017-12-19 16:17 | 1.0.0 | Moses baseline on 10% utterances (stack 400) moses | 0.17563 | 0.19837 | 0.17333 | |
93 | [anonymized] | 2017-12-06 12:05 | 1.0.0 | TAU-2017-16 - baseline with the probing multiplier parameter p in build_binary program changed moses | 0.17543 | 0.19844 | 0.17366 | |
32 | [anonymized] | 2017-12-05 20:10 | 1.0.0 | Add 5G monolingual data + MERT v2 mert moses | 0.21253 | 0.23071 | 0.20213 | |
41 | [anonymized] | 2017-12-04 13:57 | 1.0.0 | Add 5G monolingual data + MERT v1 mert moses | 0.19765 | 0.22497 | 0.19174 | |
46 | [anonymized] | 2017-11-30 18:42 | 1.0.0 | TAU-2017-06 - add dictionary extracted from dict.cc to corpus only; clean corpus moses | 0.18117 | 0.20689 | 0.18378 | |
59 | [anonymized] | 2017-11-29 14:02 | 1.0.0 | MERT - part of training set mert moses | 0.18692 | 0.21585 | 0.18183 | |
55 | [anonymized] | 2017-11-29 12:57 | 1.0.0 | TAU-2017-06 - add dictionary extracted from dict.cc to corpus and language model moses | 0.18173 | 0.20540 | 0.18324 | |
88 | kaczla | 2017-11-29 11:57 | 1.0.0 | Moses baseline on 10% utterances (6-gram model) | 0.17542 | 0.19881 | 0.17380 | |
89 | kaczla | 2017-11-29 11:54 | 1.0.0 | Moses baseline on 10% utterances (6-gram model) moses | 0.17542 | 0.19881 | 0.17380 | |
96 | kaczla | 2017-11-29 11:50 | 1.0.0 | Moses baseline on 10% utterances (5-gram model - trie data structure) moses | 0.17536 | 0.19835 | 0.17358 | |
97 | kaczla | 2017-11-29 11:48 | 1.0.0 | Moses baseline on 10% utterances moses | 0.17536 | 0.19835 | 0.17358 | |
19 | kaczla | 2017-11-29 11:40 | 1.0.0 | baseline Moses on 100% utterances + 40GB English monolingual data (4-gram + pruning) - without MERT | 0.24350 | 0.28709 | 0.24384 | |
16 | kaczla | 2017-11-29 11:37 | 1.0.0 | baseline Moses on 100% utterances + 40GB English monolingual data (4-gram + pruning + 2 MERT iterations) | 0.23935 | 0.28978 | 0.25245 | |
14 | kaczla | 2017-11-29 11:33 | 1.0.0 | baseline Moses on 100% utterances + 40GB English monolingual data (4-gram + pruning + 9 MERT iterations - weights unchanged) | 0.25819 | 0.29095 | 0.25513 | |
44 | [anonymized] | 2017-11-26 09:13 | 1.0.0 | 5G monolingual data moses | 0.19149 | 0.21697 | 0.18459 | |
67 | [anonymized] | 2017-11-24 15:45 | 1.0.0 | used MERT (tuned and tested on dev-0) mert moses | 0.17950 | 0.20199 | 0.17956 | |
66 | [anonymized] | 2017-11-24 15:39 | 1.0.0 | used MERT (tuned and tested on dev-0) mert moses | 0.17950 | 0.20199 | 0.17956 | |
68 | [anonymized] | 2017-11-24 15:30 | 1.0.0 | used MERT (tuned and tested on dev-0) mert moses | 0.17950 | 0.20199 | 0.17956 | |
160 | [anonymized] | 2017-11-22 15:41 | 1.0.0 | Portuguese-English translation (+ dictionary improvement) moses | N/A | N/A | N/A | |
25 | [anonymized] | 2017-11-22 15:05 | 1.0.0 | 40GB language model ready-made moses | 0.23216 | 0.26505 | 0.23152 | |
15 | kaczla | 2017-11-22 12:19 | 1.0.0 | baseline Moses on 100% utterances + 40GB English monolingual data (4-gram + pruning + 2 MERT iterations) - without dev-1 | 0.23935 | N/A | 0.25245 | |
24 | [anonymized] | 2017-11-22 09:06 | 1.0.0 | Added 40GB corpora ready-made moses | 0.23216 | 0.26505 | 0.23152 | |
109 | [anonymized] | 2017-11-21 16:36 | 1.0.0 | 5G LM moses | 0.17201 | 0.19406 | 0.16658 | |
38 | [anonymized] | 2017-11-21 13:39 | 1.0.0 | Add 5G monolingual data moses | 0.19762 | 0.22481 | 0.19183 | |
112 | [anonymized] | 2017-11-20 19:31 | 1.0.0 | 5G LM data | 0.16343 | 0.18296 | 0.15704 | |
106 | [anonymized] | 2017-11-19 19:27 | 1.0.0 | test moses | 0.17422 | 0.19771 | 0.17118 | |
139 | [anonymized] | 2017-11-16 14:35 | 1.0.0 | used MERT (tuned and tested on dev-0) mert moses | 0.10792 | N/A | 0.00768 | |
43 | kaczla | 2017-11-15 11:27 | 1.0.0 | baseline Moses on 10% utterances + 40GB English monolingual data moses | 0.18708 | 0.22068 | 0.18618 | |
107 | [anonymized] | 2017-11-14 23:50 | 1.0.0 | Use 10%, split compounds moses | 0.17557 | 0.19846 | 0.17017 | |
100 | [anonymized] | 2017-11-14 20:57 | 1.0.0 | Moses baseline on 10% utterances v2 moses | 0.17557 | 0.19846 | 0.17351 | |
83 | [anonymized] | 2017-11-14 20:02 | 1.0.0 | Moses baseline on 10% utterances | 0.17568 | 0.19777 | 0.17413 | |
127 | [anonymized] | 2017-11-13 19:22 | 1.0.0 | MERT tuned on a part of the training set, attempt 2 mert moses | 0.09175 | 0.10364 | 0.09110 | |
119 | [anonymized] | 2017-11-08 12:52 | 1.0.0 | Split compound nouns moses | 0.08586 | 0.09299 | 0.12383 | |
128 | [anonymized] | 2017-11-08 01:24 | 1.0.0 | Moses | 0.08586 | 0.09299 | 0.08659 | |
26 | [anonymized] | 2017-11-07 22:37 | 1.0.0 | Moses on 100% utterances, compact phrase and lexical tables ready-made moses | 0.21408 | 0.24051 | 0.21523 | |
159 | [anonymized] | 2017-11-07 21:26 | 1.0.0 | Merge branch 'master' of ssh://gonito.net/siulkilulki/wmt-2017 ready-made moses | 0.12443 | N/A | N/A | |
131 | [anonymized] | 2017-11-07 18:36 | 1.0.0 | MERT tuned on a part of the training set mert moses | N/A | N/A | 0.06720 | |
72 | kaczla | 2017-11-07 17:40 | 1.0.0 | baseline Moses on 10% utterances + Wikipedia titles (with identical titles) and Wiktionary (all translations) moses | 0.17806 | 0.19990 | 0.17846 | |
73 | kaczla | 2017-11-06 21:05 | 1.0.0 | baseline Moses on 10% utterances + Wiktionary (all translations) moses | 0.17677 | 0.19965 | 0.17712 | |
74 | kaczla | 2017-11-06 18:17 | 1.0.0 | baseline Moses on 10% utterances + Wiktionary (only first translation) moses | 0.17552 | 0.19828 | 0.17673 | |
78 | kaczla | 2017-11-02 12:49 | 1.0.0 | baseline Moses on 10% utterances + Wikipedia titles (with identical titles) moses | 0.17726 | 0.20027 | 0.17562 | |
79 | kaczla | 2017-11-02 07:28 | 1.0.0 | baseline Moses on 10% utterances + Wikipedia titles (ignore identical titles) moses | 0.17669 | 0.19682 | 0.17528 | |
37 | kaczla | 2017-10-22 20:13 | 1.0.0 | Moses baseline on 50% utterances ready-made moses | 0.19606 | 0.22087 | 0.19433 | |
137 | kaczla | 2017-10-21 07:58 | 1.0.0 | Add script for counting words | 0.01390 | 0.01587 | 0.01474 | |
138 | [anonymized] | 2017-10-11 17:56 | 1.0.0 | Hopefully a better solution stupid | N/A | N/A | 0.00768 | |
143 | [anonymized] | 2017-10-11 16:19 | 1.0.0 | my copy-solution stupid | N/A | N/A | 0.00762 | |
158 | [anonymized] | 2017-10-11 13:15 | 1.0.0 | my brilliant solution2 stupid | N/A | N/A | N/A | |
157 | [anonymized] | 2017-10-11 13:05 | 1.0.0 | my brilliant solution2 | N/A | N/A | N/A | |
146 | [anonymized] | 2017-10-11 12:49 | 1.0.0 | TAU-2017-01 solution 01 stupid | N/A | N/A | 0.00049 | |
136 | kaczla | 2017-10-11 11:12 | 1.0.0 | Popular German words stupid | 0.01390 | 0.01587 | 0.01474 | |
156 | [anonymized] | 2017-10-11 07:48 | 1.0.0 | my brilliant solution | N/A | N/A | N/A | |
153 | kaczla | 2017-10-11 05:40 | 1.0.0 | Popular English words stupid | 0.00000 | 0.00218 | 0.00000 | |
152 | kaczla | 2017-10-11 05:36 | 1.0.0 | Popular English words stupid | 0.00000 | 0.00001 | 0.00000 | |
82 | p/tlen | 2017-10-11 05:23 | 1.0.0 | Moses baseline on 10% utterances ready-made | 0.17568 | 0.19777 | 0.17413 | |
81 | p/tlen | 2017-10-11 05:13 | 1.0.0 | baseline Moses on 10% utterances | 0.17568 | N/A | 0.17413 | |
151 | kaczla | 2017-10-11 05:11 | 1.0.0 | Popular English words stupid | 0.00000 | 0.00000 | 0.00000 | |
150 | kaczla | 2017-10-11 05:05 | 1.0.0 | Popular German words stupid | 0.00000 | 0.00000 | 0.00000 | |
140 | [anonymized] | 2017-10-10 19:39 | 1.0.0 | translated days stupid | 0.00722 | N/A | 0.00766 | |
142 | [anonymized] | 2017-10-09 20:42 | 1.0.0 | test stupid | N/A | N/A | 0.00762 | |
155 | [anonymized] | 2017-10-09 20:40 | 1.0.0 | empty output stupid | N/A | N/A | N/A | |
134 | [anonymized] | 2017-10-09 09:43 | 1.0.0 | my stupid solution stupid | 0.03144 | N/A | 0.02706 | |
154 | [anonymized] | 2017-10-08 17:10 | 1.0.0 | [ ] | 0.00000 | N/A | N/A | |
147 | p/tlen | 2017-10-05 06:04 | 1.0.0 | empty output | 0.00000 | 0.00000 | 0.00000 | |
149 | [anonymized] | 2017-10-04 15:04 | 1.0.0 | stupid stupid | N/A | 0.00000 | 0.00000 | |
144 | [anonymized] | 2017-10-04 14:46 | 1.0.0 | stupid solution 2 stupid | N/A | N/A | 0.00762 | |
148 | [anonymized] | 2017-10-04 14:42 | 1.0.0 | stupid solution | N/A | N/A | 0.00000 | |
141 | [anonymized] | 2017-10-03 13:23 | 1.0.0 | just checkin' stupid | N/A | N/A | 0.00762 |
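
Several of the strongest submissions above lean on published pretrained models rather than training from scratch (the "ready-made fairseq transformer.wmt19.de-en" and "m2m-100 just-inference" entries). As a rough sketch of what such a ready-made run can look like, following fairseq's documented torch.hub interface rather than any submitter's exact pipeline (it assumes torch, fairseq, fastBPE and sacremoses are installed):

```python
# Sketch: load fairseq's pretrained WMT19 de-en transformer via torch.hub
# and translate a sentence, inference only (no training or fine-tuning).
import torch

model = torch.hub.load(
    "pytorch/fairseq",
    "transformer.wmt19.de-en",
    checkpoint_file="model1.pt",  # the release also ships model2..model4 for ensembling
    tokenizer="moses",            # Moses tokenization, as in the WMT19 release
    bpe="fastbpe",
)
model.eval()

print(model.translate("Maschinelle Übersetzung ist schwer."))
```

Output detokenization and casing still matter on top of the raw model, which is consistent with the table: the "fix tokenization" entries (about 0.40-0.42 test-A BLEU) improve markedly on the plain ready-made runs (about 0.24-0.27).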