WMT2017 Czech-English machine translation challenge for news
Translate news articles from Czech into English. [ver. 1.0.0]
This is the full list of all submissions; to see only the best ones, open the leaderboard.
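The scores below are GLEU and BLEU on the dev-0 and test-A sets; the official numbers come from GEval, which some entries' notes mention. As a rough local sanity check before submitting, the same metrics can be approximated with NLTK. The sketch below assumes the challenge's file layout, with one tokenized sentence per line in `dev-0/out.tsv` (system output) and `dev-0/expected.tsv` (reference); its tokenization and smoothing may differ slightly from GEval's, so expect small deviations from the table.

```python
# Minimal sketch of local GLEU/BLEU scoring with NLTK; an approximation of
# the official GEval scores, not the evaluator itself. File paths follow the
# challenge's dev-0 layout (one sentence per line) and are assumptions.
from nltk.translate.bleu_score import corpus_bleu
from nltk.translate.gleu_score import corpus_gleu

def read_tokenized(path):
    """Read one whitespace-tokenized sentence per line."""
    with open(path, encoding="utf-8") as f:
        return [line.strip().split() for line in f]

hypotheses = read_tokenized("dev-0/out.tsv")                          # system output
references = [[ref] for ref in read_tokenized("dev-0/expected.tsv")]  # one reference per sentence

print(f"GLEU: {corpus_gleu(references, hypotheses):.4f}")
print(f"BLEU: {corpus_bleu(references, hypotheses):.4f}")
```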
# | submitter | when | ver. | description | dev-0 GLEU | dev-0 BLEU | test-A GLEU | test-A BLEU
---|---|---|---|---|---|---|---|---
104 | [anonymized] | 2021-02-20 14:54 | 1.0.0 | test for ev fairseq m2m-100 just-inference | 0.0179 | 0.0000 | 0.0292 | 0.0075 | |
155 | [anonymized] | 2021-02-18 18:35 | 1.0.0 | files + solution + out | 0.0179 | 0.0000 | N/A | N/A | |
120 | [anonymized] | 2021-02-05 09:22 | 1.0.0 | Add results. pytorch-nn gru | 0.0183 | 0.0000 | 0.0182 | 0.0004 | |
137 | [anonymized] | 2021-02-03 20:59 | 1.0.0 | cz-en | 0.0277 | 0.0016 | 0.0288 | 0.0000 | |
136 | [anonymized] | 2021-02-03 17:38 | 1.0.0 | gru_cz pytorch-nn gru | 0.0175 | 0.0007 | 0.0170 | 0.0000 | |
135 | [anonymized] | 2021-01-30 08:48 | 1.0.0 | Add retrained model. pytorch-nn gru | N/A | N/A | 0.0204 | 0.0000 | |
134 | [anonymized] | 2021-01-29 08:34 | 1.0.0 | Add result. | N/A | N/A | 0.0186 | 0.0000 | |
118 | [anonymized] | 2021-01-27 04:55 | 1.0.0 | v2.1 lstm pytorch-nn | 0.0197 | 0.0007 | 0.0192 | 0.0009 | |
154 | [anonymized] | 2021-01-27 03:00 | 1.0.0 | lstm lstm pytorch-nn | N/A | N/A | N/A | N/A | |
133 | [anonymized] | 2021-01-27 01:53 | 1.0.0 | v1.2 | 0.0100 | 0.0000 | 0.0095 | 0.0000 | |
123 | [anonymized] | 2021-01-26 01:34 | 1.0.0 | v1.1 | 0.0112 | 0.0000 | 0.0115 | 0.0000 | |
72 | [anonymized] | 2020-01-29 07:51 | 1.0.0 | Moses Czech to English with 10 corpus sizes trainsize=10000 moses | N/A | N/A | 0.1206 | 0.0565 | |
71 | [anonymized] | 2020-01-29 07:51 | 1.0.0 | Moses Czech to English with 10 corpus sizes trainsize=1000 moses | N/A | N/A | 0.1206 | 0.0565 | |
67 | [anonymized] | 2020-01-29 07:51 | 1.0.0 | Moses Czech to English with 10 corpus sizes trainsize=50000 moses | N/A | N/A | 0.1449 | 0.0822 | |
64 | [anonymized] | 2020-01-29 07:51 | 1.0.0 | Moses Czech to English with 10 corpus sizes trainsize=100000 moses | N/A | N/A | 0.1544 | 0.0943 | |
60 | [anonymized] | 2020-01-29 07:51 | 1.0.0 | Moses Czech to English with 10 corpus sizes trainsize=198842 moses | N/A | N/A | 0.1639 | 0.1044 | |
59 | [anonymized] | 2020-01-29 07:51 | 1.0.0 | Moses Czech to English with 10 corpus sizes trainsize=200000 moses | N/A | N/A | 0.1639 | 0.1045 | |
58 | [anonymized] | 2020-01-29 07:51 | 1.0.0 | Moses Czech to English with 10 corpus sizes trainsize=500000 moses | N/A | N/A | 0.1647 | 0.1057 | |
57 | [anonymized] | 2020-01-29 07:51 | 1.0.0 | Moses Czech to English with 10 corpus sizes trainsize=400000 moses | N/A | N/A | 0.1647 | 0.1057 | |
56 | [anonymized] | 2020-01-29 07:51 | 1.0.0 | Moses Czech to English with 10 corpus sizes trainsize=300000 moses | N/A | N/A | 0.1647 | 0.1057 | |
55 | [anonymized] | 2020-01-29 07:51 | 1.0.0 | Moses Czech to English with 10 corpus sizes trainsize=250000 moses | N/A | N/A | 0.1647 | 0.1057 | |
48 | [anonymized] | 2020-01-27 09:33 | 1.0.0 | TAU_28 Marian s2s, tokenizer, truecaser marian | 0.2100 | 0.1592 | 0.1823 | 0.1298 | |
22 | [anonymized] | 2019-12-30 15:02 | 1.0.0 | Ran an available model marian | 0.2985 | 0.2569 | 0.2918 | 0.2546 | |
63 | [anonymized] | 2019-11-20 09:44 | 1.0.0 | Add dev-0 results moses | 0.1697 | 0.1039 | 0.1576 | 0.0958 | |
62 | [anonymized] | 2019-11-13 23:05 | 1.0.0 | moses fixed solution moses | 0.2541 | 0.2042 | 0.1576 | 0.0958 | |
85 | [anonymized] | 2019-11-12 22:51 | 1.0.0 | simple phrase-based system with self-made GROW-DIAG-FINAL, the phrase-extraction algorithm (Philipp Koehn), and a very, very simple decoder self-made | 0.0698 | 0.0173 | 0.0657 | 0.0159 | |
115 | [anonymized] | 2019-11-06 09:24 | 1.0.0 | basic solution (update missing code chunk) self-made word-level | 0.0274 | 0.0030 | 0.0261 | 0.0032 | |
92 | [anonymized] | 2019-11-06 08:25 | 1.0.0 | Solution using IBM Model 1 self-made word-level | 0.0641 | 0.0115 | 0.0452 | 0.0141 | |
91 | [anonymized] | 2019-11-06 08:23 | 1.0.0 | Test solution IBM model1 with test-A | 0.0641 | 0.0115 | 0.0452 | 0.0141 | |
153 | [anonymized] | 2019-11-06 08:20 | 1.0.0 | Test solution IBM model1 | 0.0641 | 0.0115 | N/A | N/A | |
114 | [anonymized] | 2019-11-05 23:00 | 1.0.0 | basic solution self-made word-level | 0.0274 | 0.0030 | 0.0261 | 0.0032 | |
94 | [anonymized] | 2019-11-05 19:42 | 1.0.0 | 300 lines 30 iterations self-made java word-level | 0.0464 | 0.0177 | 0.0397 | 0.0134 | |
68 | Artur Nowakowski | 2019-11-04 15:41 | 1.0.0 | GIZA++ IBM Model 1 word-level | 0.1727 | 0.0911 | 0.1473 | 0.0700 | |
80 | [anonymized] | 2019-11-02 09:56 | 1.0.0 | now it's OK, but the previous one was strange self-made word-level | 0.0800 | 0.0248 | 0.0626 | 0.0189 | |
152 | [anonymized] | 2019-11-02 09:49 | 1.0.0 | my local geval thinks that out files have too many lines | N/A | N/A | N/A | N/A | |
97 | [anonymized] | 2019-11-02 07:06 | 1.0.0 | Second attempt self-made word-level | 0.0487 | 0.0150 | 0.0415 | 0.0114 | |
95 | [anonymized] | 2019-11-01 14:06 | 1.0.0 | Another try with IBM MODEL 1 self-made word-level | 0.0760 | 0.0151 | 0.0685 | 0.0127 | |
70 | [anonymized] | 2019-10-31 13:13 | 1.0.0 | use giza++ for word alignment word-level | 0.1441 | 0.0730 | 0.1273 | 0.0614 | |
151 | [anonymized] | 2019-10-29 22:48 | 1.0.0 | final stupid solution Y stupid | 0.0219 | 0.0018 | N/A | N/A | |
150 | [anonymized] | 2019-10-29 22:25 | 1.0.0 | final stupid solution | 0.0188 | 0.0000 | N/A | N/A | |
102 | [anonymized] | 2019-10-29 17:59 | 1.0.0 | First attempt self-made word-level | 0.0540 | 0.0139 | 0.0456 | 0.0099 | |
27 | [anonymized] | 2019-10-29 08:07 | 1.0.0 | Moses with MERT tuning moses | 0.2541 | 0.2041 | 0.2071 | 0.1542 | |
117 | [anonymized] | 2019-10-23 12:01 | 1.0.0 | add main.py | N/A | N/A | 0.0490 | 0.0031 | |
116 | [anonymized] | 2019-10-23 08:54 | 1.0.0 | stupid solution stupid | N/A | N/A | 0.0490 | 0.0031 | |
76 | [anonymized] | 2019-10-23 08:19 | 1.0.0 | Solution stupid | N/A | N/A | 0.0846 | 0.0329 | |
26 | Artur Nowakowski | 2019-10-23 08:13 | 1.0.0 | Moses with MERT tuning moses | 0.2443 | 0.1929 | 0.2191 | 0.1675 | |
81 | [anonymized] | 2019-10-22 22:25 | 1.0.0 | stupid solution stupid | 0.0440 | 0.0080 | 0.0481 | 0.0184 | |
105 | [anonymized] | 2019-10-22 20:42 | 1.0.0 | example stupid stupid | 0.0489 | 0.0075 | 0.0423 | 0.0068 | |
119 | [anonymized] | 2019-10-22 18:22 | 1.0.0 | 300 most common tetragrams stupid java | 0.0204 | 0.0010 | 0.0190 | 0.0008 | |
112 | [anonymized] | 2019-10-22 15:03 | 1.0.0 | sialala one more stupid solution stupid | 0.0351 | 0.0032 | 0.0352 | 0.0038 | |
132 | [anonymized] | 2019-10-22 14:29 | 1.0.0 | sialala stupid solution | 0.0424 | 0.0017 | 0.0389 | 0.0000 | |
106 | [anonymized] | 2019-10-22 11:01 | 1.0.0 | Czech-English translation stupid | 0.0446 | 0.0074 | 0.0364 | 0.0058 | |
90 | [anonymized] | 2019-10-22 09:38 | 1.0.0 | cp in.tsv out.tsv stupid | 0.0464 | 0.0175 | 0.0420 | 0.0151 | |
93 | [anonymized] | 2019-10-21 21:13 | 1.0.0 | translation stupid | N/A | N/A | 0.0603 | 0.0141 | |
113 | [anonymized] | 2019-10-21 10:27 | 1.0.0 | Stupid solution stupid | 0.0308 | 0.0041 | 0.0258 | 0.0037 | |
110 | [anonymized] | 2019-10-20 19:52 | 1.0.0 | first solution stupid | 0.0302 | 0.0081 | 0.0285 | 0.0040 | |
121 | [anonymized] | 2019-10-20 18:23 | 1.0.0 | stupid solution stupid | 0.0166 | 0.0006 | 0.0145 | 0.0004 | |
131 | [anonymized] | 2019-10-19 22:56 | 1.0.0 | stuuupid plus common words | 0.0395 | 0.0015 | 0.0364 | 0.0000 | |
130 | [anonymized] | 2019-10-19 20:58 | 1.0.0 | no potato | 0.0372 | 0.0014 | 0.0364 | 0.0000 | |
129 | [anonymized] | 2019-10-19 20:48 | 1.0.0 | stuuupid solution | 0.0372 | 0.0014 | 0.0343 | 0.0000 | |
79 | [anonymized] | 2019-10-19 17:38 | 1.0.0 | My stupid solution stupid | 0.0802 | 0.0255 | 0.0724 | 0.0217 | |
82 | [anonymized] | 2019-10-18 12:10 | 1.0.0 | TAU2019-001 stupid | 0.0548 | 0.0209 | 0.0474 | 0.0178 | |
107 | [anonymized] | 2019-10-17 12:40 | 1.0.0 | Simple task with non-zero BLEU stupid | 0.0407 | 0.0072 | 0.0327 | 0.0056 | |
108 | [anonymized] | 2019-10-16 10:49 | 1.0.0 | stupid best 4-gram sentence stupid | 0.0365 | 0.0095 | 0.0327 | 0.0056 | |
149 | [anonymized] | 2019-10-16 10:44 | 1.0.0 | 4-gram sentences | 0.0365 | 0.0095 | N/A | N/A | |
2 | [anonymized] | 2019-10-15 21:03 | 1.0.0 | lab1 - substitute existing | 0.0639 | 0.0198 | 0.3373 | 0.3095 | |
1 | Artur Nowakowski | 2019-10-15 13:19 | 1.0.0 | Substitute solution existing | 0.0631 | 0.0196 | 0.3373 | 0.3095 | |
21 | [anonymized] | 2019-10-14 12:08 | 1.0.0 | Regex solution | 0.0691 | 0.0292 | 0.3067 | 0.2689 | |
28 | [anonymized] | 2019-02-08 09:05 | 1.0.0 | TAU-2018-020 marian | 0.0507 | 0.0273 | 0.1886 | 0.1435 | |
128 | [anonymized] | 2019-02-08 09:03 | 1.0.0 | TAU-2018-016 neural-network word-level | 0.0225 | 0.0009 | 0.0221 | 0.0000 | |
49 | [anonymized] | 2019-02-07 22:09 | 1.0.0 | a-an try v4 | 0.1944 | 0.1452 | 0.1696 | 0.1212 | |
53 | [anonymized] | 2019-02-07 21:50 | 1.0.0 | a-an v3 | 0.1936 | 0.1444 | 0.1689 | 0.1205 | |
51 | [anonymized] | 2019-02-07 21:47 | 1.0.0 | a-an v2 | 0.1944 | 0.1452 | 0.1696 | 0.1212 | |
52 | [anonymized] | 2019-02-07 21:26 | 1.0.0 | try X | 0.1937 | 0.1446 | 0.1689 | 0.1206 | |
41 | [anonymized] | 2019-02-05 20:27 | 1.0.0 | Cheap fixed translations | 0.2142 | 0.1689 | 0.1880 | 0.1425 | |
42 | [anonymized] | 2019-02-05 20:17 | 1.0.0 | Cheap fixed copying and translation | 0.2142 | 0.1689 | 0.1880 | 0.1425 | |
40 | [anonymized] | 2019-02-05 19:56 | 1.0.0 | Add -year-old for test-A as well | 0.2142 | 0.1689 | 0.1879 | 0.1426 | |
43 | [anonymized] | 2019-02-05 19:49 | 1.0.0 | Add manual female surnames copying | 0.2140 | 0.1687 | 0.1875 | 0.1419 | |
29 | [anonymized] | 2019-02-05 19:09 | 1.0.0 | Post process Marian TAU-2018-020 marian | 0.2142 | 0.1689 | 0.1880 | 0.1426 | |
50 | [anonymized] | 2019-02-04 22:46 | 1.0.0 | a / an try marian | 0.1944 | 0.1452 | 0.1696 | 0.1212 | |
34 | [anonymized] | 2019-02-04 22:20 | 1.0.0 | Fix often faulty Rio Summer Olympics translation | 0.2142 | 0.1689 | 0.1879 | 0.1426 | |
30 | [anonymized] | 2019-02-04 22:14 | 1.0.0 | Add one-off fixes for weird translations | 0.2142 | 0.1689 | 0.1880 | 0.1426 | |
33 | [anonymized] | 2019-02-04 21:24 | 1.0.0 | Fix dollar <space $ space> | 0.2141 | 0.1688 | 0.1879 | 0.1426 | |
32 | [anonymized] | 2019-02-04 20:08 | 1.0.0 | More punctuation removed | 0.2141 | 0.1688 | 0.1879 | 0.1426 | |
31 | [anonymized] | 2019-02-03 21:13 | 1.0.0 | Replace " <punctuation>" with "<punctuation>" | 0.2141 | 0.1688 | 0.1879 | 0.1426 | |
39 | [anonymized] | 2019-02-03 19:16 | 1.0.0 | marian baseline epoch 5 | 0.2141 | 0.1687 | 0.1879 | 0.1426 | |
38 | [anonymized] | 2019-01-30 10:20 | 1.0.0 | wfwbfei | 0.2142 | 0.1688 | 0.1879 | 0.1426 | |
37 | [anonymized] | 2019-01-30 10:15 | 1.0.0 | 3 | 0.2142 | 0.1688 | 0.1879 | 0.1426 | |
148 | [anonymized] | 2019-01-30 10:04 | 1.0.0 | next | N/A | N/A | N/A | N/A | |
147 | [anonymized] | 2019-01-30 10:03 | 1.0.0 | 2 | N/A | N/A | N/A | N/A | |
36 | [anonymized] | 2019-01-30 09:52 | 1.0.0 | full files marian | 0.2141 | 0.1688 | 0.1879 | 0.1426 | |
146 | [anonymized] | 2019-01-30 09:46 | 1.0.0 | marian | N/A | N/A | N/A | N/A | |
145 | [anonymized] | 2019-01-30 09:44 | 1.0.0 | my try | N/A | N/A | N/A | N/A | |
35 | p/tlen | 2019-01-27 16:13 | 1.0.0 | Marian 5 epochs marian | 0.2141 | 0.1688 | 0.1879 | 0.1426 | |
144 | [anonymized] | 2019-01-23 07:56 | 1.0.0 | beginning neural-network word-level | N/A | N/A | N/A | N/A | |
73 | [anonymized] | 2019-01-09 10:30 | 1.0.0 | ex 03 existing | 0.1370 | 0.0592 | 0.1252 | 0.0509 | |
83 | [anonymized] | 2019-01-09 09:20 | 1.0.0 | merge and ex01 simple non-zero | 0.0504 | 0.0198 | 0.0437 | 0.0163 | |
143 | [anonymized] | 2019-01-02 08:40 | 1.0.0 | neural network translate by words - Dawid Kubicki neural-network word-level | 0.0548 | 0.0127 | N/A | N/A | |
54 | [anonymized] | 2018-11-07 11:15 | 1.0.0 | Moses Europarl tuned | 0.1887 | 0.1295 | 0.1671 | 0.1079 | |
84 | [anonymized] | 2018-11-07 09:39 | 1.0.0 | GEVAL from IBM Model 1 self-made | 0.0450 | 0.0000 | 0.0805 | 0.0162 | |
142 | [anonymized] | 2018-11-07 09:08 | 1.0.0 | tau-003-improve | 0.0586 | 0.0163 | N/A | N/A | |
66 | [anonymized] | 2018-11-07 00:04 | 1.0.0 | moses moses | 0.1658 | 0.1002 | 0.1505 | 0.0871 | |
69 | [anonymized] | 2018-11-06 20:13 | 1.0.0 | Self-made fast align self-made | 0.1619 | 0.0795 | 0.1473 | 0.0662 | |
5 | [anonymized] | 2018-11-06 20:12 | 1.0.0 | translation IBM MODEL 1 files self-made existing | 0.0450 | 0.0000 | 0.3324 | 0.3039 | |
61 | [anonymized] | 2018-11-06 18:31 | 1.0.0 | Moses Europarl no-tune moses | 0.1864 | 0.1260 | 0.1632 | 0.1030 | |
4 | [anonymized] | 2018-11-06 18:26 | 1.0.0 | solution existing | 0.0586 | 0.0163 | 0.3324 | 0.3039 | |
47 | [anonymized] | 2018-10-31 18:58 | 1.0.0 | moses solution moses | 0.2140 | 0.1525 | 0.1911 | 0.1322 | |
46 | [anonymized] | 2018-10-31 18:46 | 1.0.0 | Moses | 0.2140 | 0.1525 | 0.1911 | 0.1322 | |
44 | [anonymized] | 2018-10-31 10:58 | 1.0.0 | moses solution moses | 0.2221 | 0.1640 | 0.1979 | 0.1393 | |
11 | [anonymized] | 2018-10-30 11:21 | 1.0.0 | improve online-B existing | 0.0645 | 0.0258 | 0.3079 | 0.2704 | |
3 | [anonymized] | 2018-10-29 15:56 | 1.0.0 | Simple regex improvement TAU-2018-003.py existing | 0.0450 | 0.0000 | 0.3324 | 0.3039 | |
45 | [anonymized] | 2018-10-27 12:09 | 1.0.0 | Moses v1 moses | 0.2032 | 0.1428 | 0.1911 | 0.1323 | |
20 | [anonymized] | 2018-10-24 09:07 | 1.0.0 | online-B existing | N/A | N/A | 0.3067 | 0.2689 | |
24 | [anonymized] | 2018-10-24 09:03 | 1.0.0 | ex02 existing | N/A | N/A | 0.2862 | 0.2465 | |
19 | [anonymized] | 2018-10-24 08:22 | 1.0.0 | TAU 002 existing | 0.0586 | 0.0163 | 0.3067 | 0.2689 | |
18 | [anonymized] | 2018-10-24 08:16 | 1.0.0 | 002 existing | 0.0645 | 0.0258 | 0.3067 | 0.2689 | |
17 | [anonymized] | 2018-10-24 08:15 | 1.0.0 | TAU-2018-002 existing | 0.0645 | 0.0258 | 0.3067 | 0.2689 | |
10 | [anonymized] | 2018-10-24 08:12 | 1.0.0 | uedin dev existing | N/A | N/A | 0.3324 | 0.3038 | |
9 | [anonymized] | 2018-10-23 16:14 | 1.0.0 | uedin existing | 0.0586 | 0.0163 | 0.3324 | 0.3038 | |
25 | [anonymized] | 2018-10-23 16:10 | 1.0.0 | PJATK existing | 0.0586 | 0.0163 | 0.2739 | 0.2324 | |
16 | [anonymized] | 2018-10-23 14:25 | 1.0.0 | TAU-2018-002 existing | N/A | N/A | 0.3067 | 0.2689 | |
23 | [anonymized] | 2018-10-23 12:56 | 1.0.0 | Online-A existing | 0.1259 | 0.0502 | 0.2862 | 0.2465 | |
8 | [anonymized] | 2018-10-22 12:13 | 1.0.0 | uedin existing | 0.0709 | 0.0272 | 0.3324 | 0.3038 | |
15 | [anonymized] | 2018-10-17 20:00 | 1.0.0 | TAU 002 - OnlineB existing | 0.1421 | 0.1451 | 0.3067 | 0.2689 | |
14 | [anonymized] | 2018-10-17 15:39 | 1.0.0 | online-B solution existing | 0.0645 | 0.0258 | 0.3067 | 0.2689 | |
13 | [anonymized] | 2018-10-17 10:22 | 1.0.0 | assignment Dawid Kubicki existing | N/A | N/A | 0.3067 | 0.2689 | |
141 | [anonymized] | 2018-10-17 10:13 | 1.0.0 | assignment Dawid Kubicki | N/A | N/A | N/A | N/A | |
12 | [anonymized] | 2018-10-17 09:27 | 1.0.0 | assignment existing | N/A | N/A | 0.3067 | 0.2689 | |
7 | [anonymized] | 2018-10-17 09:17 | 1.0.0 | uedin-nmt solution existing | 0.0450 | 0.0000 | 0.3324 | 0.3038 | |
6 | [anonymized] | 2018-10-17 09:15 | 1.0.0 | Uedin-nmt existing | N/A | N/A | 0.3324 | 0.3038 | |
77 | [anonymized] | 2018-10-17 08:32 | 1.0.0 | third commit simple non-zero | 0.0709 | 0.0272 | 0.0657 | 0.0238 | |
89 | [anonymized] | 2018-10-17 08:27 | 1.0.0 | second commit simple non-zero | 0.0464 | 0.0175 | 0.0420 | 0.0151 | |
75 | [anonymized] | 2018-10-17 08:10 | 1.0.0 | Added src simple non-zero | 0.1259 | 0.0502 | 0.1158 | 0.0434 | |
88 | [anonymized] | 2018-10-17 08:01 | 1.0.0 | first commit simple non-zero | 0.0464 | 0.0175 | 0.0420 | 0.0151 | |
74 | [anonymized] | 2018-10-16 19:53 | 1.0.0 | Auto word dictionary simple non-zero | 0.1259 | 0.0502 | 0.1158 | 0.0434 | |
78 | [anonymized] | 2018-10-16 16:08 | 1.0.0 | simple solution simple non-zero | 0.0645 | 0.0258 | 0.0590 | 0.0223 | |
98 | [anonymized] | 2018-10-16 14:04 | 1.0.0 | in-place replacement | 0.0450 | 0.0000 | 0.0301 | 0.0114 | |
101 | [anonymized] | 2018-10-16 13:55 | 1.0.0 | +Common words -special names | 0.0450 | 0.0000 | 0.0333 | 0.0107 | |
86 | [anonymized] | 2018-10-16 13:36 | 1.0.0 | my solution simple non-zero | 0.0586 | 0.0163 | 0.0551 | 0.0157 | |
109 | [anonymized] | 2018-10-16 11:41 | 1.0.0 | special names without inflection | 0.0450 | 0.0000 | 0.0282 | 0.0048 | |
111 | [anonymized] | 2018-10-16 11:36 | 1.0.0 | own_names | 0.0450 | 0.0000 | 0.0271 | 0.0038 | |
99 | [anonymized] | 2018-10-16 11:30 | 1.0.0 | three-letter acronyms | 0.0450 | 0.0000 | 0.0338 | 0.0111 | |
100 | [anonymized] | 2018-10-16 11:28 | 1.0.0 | acronym 2+ | 0.0450 | 0.0000 | 0.0335 | 0.0108 | |
103 | [anonymized] | 2018-10-16 10:09 | 1.0.0 | +Acronyms | 0.0450 | 0.0000 | 0.0310 | 0.0084 | |
96 | [anonymized] | 2018-10-16 10:05 | 1.0.0 | numbers + copypaste attack | 0.0450 | 0.0000 | 0.0352 | 0.0119 | |
140 | [anonymized] | 2018-10-15 18:16 | 1.0.0 | ? | 0.0464 | 0.0175 | N/A | N/A | |
127 | [anonymized] | 2018-10-15 18:14 | 1.0.0 | frequency attack using average wordcount | N/A | N/A | 0.0458 | 0.0000 | |
126 | [anonymized] | 2018-10-15 18:09 | 1.0.0 | 7 words actual file | N/A | N/A | 0.0400 | 0.0000 | |
125 | [anonymized] | 2018-10-15 18:08 | 1.0.0 | 7 words | N/A | N/A | 0.0342 | 0.0000 | |
124 | [anonymized] | 2018-10-15 17:42 | 1.0.0 | frequency 5 word attack | N/A | N/A | 0.0342 | 0.0000 | |
122 | [anonymized] | 2018-10-15 16:42 | 1.0.0 | frequency attack | N/A | N/A | 0.0195 | 0.0000 | |
87 | [anonymized] | 2018-10-15 16:03 | 1.0.0 | keep testing for too few lines simple non-zero | N/A | N/A | 0.0420 | 0.0151 | |
139 | [anonymized] | 2018-10-15 16:02 | 1.0.0 | test submission A | N/A | N/A | N/A | N/A | |
138 | [anonymized] | 2018-10-15 15:58 | 1.0.0 | test submission | N/A | N/A | N/A | N/A | |
65 | [anonymized] | 2018-10-14 22:17 | 1.0.0 | First solution simple non-zero | 0.1421 | 0.1451 | 0.0893 | 0.0877 |
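
Many of the "self-made word-level" entries above implement IBM Model 1 word alignment trained with EM (some via GIZA++, some from scratch) and then translate word by word. The sketch below is a minimal, generic version of that EM loop on toy data; it illustrates the technique and is not any submitter's actual code.

```python
# Minimal sketch of IBM Model 1 EM training, the technique behind several
# "self-made word-level" entries above; toy data, not a submitter's code.
from collections import defaultdict

def train_ibm1(pairs, iterations=20):
    """Estimate t(f | e): the probability that English word e translates
    to Czech word f, from (Czech tokens, English tokens) sentence pairs."""
    czech_vocab = {f for fs, _ in pairs for f in fs}
    t = defaultdict(lambda: 1.0 / len(czech_vocab))  # uniform initialization
    for _ in range(iterations):
        count = defaultdict(float)  # expected counts c(f, e)
        total = defaultdict(float)  # expected counts c(e)
        for fs, es in pairs:        # E-step: fractional alignment counts
            for f in fs:
                z = sum(t[(f, e)] for e in es)  # normalize over possible alignments
                for e in es:
                    delta = t[(f, e)] / z
                    count[(f, e)] += delta
                    total[e] += delta
        for (f, e), c in count.items():  # M-step: renormalize per English word
            t[(f, e)] = c / total[e]
    return t

# Toy usage: repeated co-occurrence pins the alignments down.
pairs = [("velký dům".split(), "big house".split()),
         ("velký pes".split(), "big dog".split())]
t = train_ibm1(pairs)
print(round(t[("dům", "house")], 3), round(t[("dům", "big")], 3))  # high vs. low
```

Decoding with the resulting table (for each source word, pick the English word e that maximizes t(f | e) and emit it in source order) gives exactly the kind of low but non-zero BLEU scores those rows show.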