
Posts by Tom Kocmi

- All systems will be human-evaluated (no downsampling using automatic metrics), and we are preparing a new contrastive human evaluation protocol
- LLM benchmarking focused on open-weight models
- Abstract submission has been replaced with a model card poll

All details are at www2.statmt.org/wmt26/transl...

1 month ago

Multimodal context - same as last year: for the spoken domain we provide the original video, while for other domains an image can be provided as additional context (such as screenshots or infographics). Purely text-to-text systems can still participate as in the past

1 month ago

Instruction-following context in prompts. Systems may disregard it, but failing to follow instructions is considered a translation error. You can expect the following phenomena: formal/informal voice, glossaries, structured translation (JSON, HTML, ...), and style and expressions (e.g. "yuhuuu", "tbh")
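As a rough illustration, an instruction-bearing source might combine a glossary with a structured-output requirement. This is a hypothetical prompt sketch, not the official WMT26 format; the glossary entry and JSON schema are made up:

```python
import json

# Hypothetical instruction-bearing translation prompt (not the
# actual WMT26 format): a glossary plus a JSON output instruction.
glossary = {"machine translation": "strojový překlad"}

prompt = (
    "Translate the following English text to Czech.\n"
    "Use formal voice and respect this glossary: "
    + json.dumps(glossary, ensure_ascii=False) + "\n"
    'Return the result as JSON: {"translation": "..."}\n\n'
    "Source: Machine translation is not solved."
)

# A compliant system reply must be valid JSON with a "translation"
# field; ignoring the instruction would count as a translation error.
reply = '{"translation": "Strojový překlad není vyřešen."}'
parsed = json.loads(reply)
print(parsed["translation"])
```

Checking that the output parses and follows the requested structure is exactly the kind of constraint a system could be evaluated against.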

1 month ago

You may participate in up to 20 language pairs out of which we host 9 new ones:
Czech to Vietnamese
Chinese to Japanese (direction reversed)
EN to Armenian
EN to Belarusian
EN to Indonesian
EN to Kazakh
EN to Ladin
EN to Ligurian
EN to Northern Sámi

1 month ago

We'd like to officially announce the 21st iteration of the WMT General Machine Translation shared task and invite you to participate. Here is the list of main changes:

1 month ago

How well do LLMs handle multilinguality? 🌍🤖

🔬We brought the rigor from Machine Translation evaluation to multilingual LLM benchmarking and organized the WMT25 Multilingual Instruction Shared Task spanning 30 languages and 5 subtasks.

5 months ago

Ready for our poster today at #COLM2025!

💭This paper has had an interesting journey, come find out and discuss with us! @swetaagrawal.bsky.social @kocmitom.bsky.social

Side note: being a parent in research does have its perks, poster transportation solved ✅

6 months ago

This project wouldn’t have been possible without the brilliant minds driving the work: Lorenzo Proietti, @sted19.bsky.social and @zouharvi.bsky.social

7 months ago

One way to raise the bar is by rethinking the source selection process: instead of random samples, we built a model that chooses the most difficult data for translation. And we've already put our work into practice: this year's WMT25 General MT test sets use our approach to make the eval more challenging.
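The core idea can be sketched in a few lines: rank candidate source segments by an estimated difficulty score and keep the hardest. The scoring function here is a made-up stand-in (the actual WMT25 selection model is not reproduced here):

```python
# Toy sketch of difficulty-based test-set selection; the difficulty
# scores below are hypothetical, not from the actual WMT25 model.

def select_hardest(segments, difficulty, k):
    """Keep the k segments with the highest estimated difficulty."""
    ranked = sorted(segments, key=difficulty, reverse=True)
    return ranked[:k]

# Assume difficulty is approximated by, e.g., 1 minus a baseline
# system's metric score on each segment (harder = worse baseline).
scores = {"easy news sentence": 0.1,
          "idiomatic tweet w/ slang": 0.9,
          "long technical paragraph": 0.7}

test_set = select_hardest(scores, scores.get, k=2)
print(test_set)  # the two hardest segments
```

In practice the difficulty estimator would be a learned model, but the selection step itself is this simple top-k filter.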

7 months ago

🚩Machine Translation is far from “solved” - the test sets just got too easy. 🚩

Yes, the systems are much stronger. But the other half of the story is that test sets haven’t kept up. It’s no longer enough to just take a random news article and expect systems to stumble.

7 months ago
Command A Translate: Secure translation for global enterprises. The new industry standard for secure, enterprise-ready machine translation.

Oh, and the best part: we’re releasing the weights so researchers can run wild with it. Stay tuned for our upcoming technical report!

cohere.com/blog/command...

7 months ago

🚀 Thrilled to share what I’ve been working on at Cohere!

What began in January as a scribble in my notebook “how challenging would it be...” turned into a fully-fledged translation model that outperforms both open and closed-source systems, including long-standing MT leaders.

7 months ago

A correction: we received 22 multilingual systems and only 14 bilingual ones, highlighting a shift in the field towards multilinguality.

7 months ago

We received 14 specialized systems, while 10 are multilingual, and almost all participants fine-tuned LLMs.

In contrast to previous years, constrained systems are now reaching top-tier rankings, challenging the dominance of unconstrained ones.

Stay tuned for the 20th anniversary WMT conference.

7 months ago

We saw increased momentum in participation this year: 36 unique teams competed to improve MT performance. Furthermore, we collected outputs of 24 popular LLMs and online systems, reaching 50 evaluated systems in our annual benchmark.

7 months ago
Preliminary Ranking of WMT25 General Machine Translation Systems We present the preliminary ranking of the WMT25 General Machine Translation Shared Task, in which MT systems have been evaluated using automatic metrics. As this ranking is based on automatic evaluati...

📊 Preliminary ranking of WMT 2025 General Machine Translation benchmark is here!

But don't draw conclusions just yet - automatic metrics are biased toward techniques such as using a metric as a reward model or MBR decoding. The official human ranking will be part of the General MT findings at WMT.

arxiv.org/abs/2508.14909

7 months ago
WMT 2025

Hey, hey! 🎉 We’ve released the blind test set for this year’s WMT General MT and multilingual instruction tasks. Submit your systems to the special 20th anniversary of the conference and see how you compare to others!
The deadline is next week, on 3rd July.
www2.statmt.org/wmt25/

9 months ago

Tired of messy, non-replicable multilingual LLM evaluation? So were we.

In our new paper, we experimentally illustrate common evaluation issues and show how structured evaluation design, transparent reporting, and meta-evaluation can help us build stronger models.

1 year ago

☀️ Summer internship at Cohere!
Are you excited about multilingual evaluation, human judgment, or meta-eval? Come help us explore what a rigorous eval really looks like while questioning the status quo in LLM evaluation.
I'm looking for an intern (EU timezone preferred). Are you interested? Ping me!

1 year ago

It’s here! Our new model’s technical report is out. I'm especially proud of the work we did on its multilingual capabilities - this was a massive, collective effort!

1 year ago
Multilingual Instruction Shared Task

Big news from WMT! 🎉 We are expanding beyond MT and launching a new multilingual instruction shared task. Our goal is to foster truly multilingual LLM evaluation and best practices in automatic and human evaluation. Join us and build the winning multilingual system!
www2.statmt.org/wmt25/multil...

1 year ago

AI is evolving fast, and Aya Vision is proof of that. This open-weights model is designed to make LLMs more powerful across languages and modalities, especially vision! Can't wait to see the real-world applications, perhaps at WMT this year 😇

1 year ago
WMT24++: Expanding the Language Coverage of WMT24 to 55 Languages & Dialects As large language models (LLM) become more and more capable in languages other than English, it is important to collect benchmark datasets in order to evaluate their multilingual performance, includin...

Huge shoutout to colleagues at Google & Unbabel for extending our WMT24 test set to 55 languages in four domains; this is a game changer! 🚀

I really hope it puts the final nail in the coffin of FLORES or WMT14. The field is evolving; legacy test sets can't show your progress

arxiv.org/abs/2502.124...

1 year ago
Shared Task: General Machine Translation

* Revamped constrained track – No restrictions on training data except licensing; all open models under 20B parameters are allowed.

* More challenging sources; long-context translation; prompt preambles; and much more.

📌 All details are available at www2.statmt.org/wmt25/transl...

1 year ago

* New human-evaluated language pairs: EN–Arabic, EN–Estonian, EN–Korean, EN–Serbian, Czech–German, Bhojpuri–EN, Maasai–EN

* New multilingual subtask – Can you build a system that translates 30 languages?

* New modalities – Additional context from video and image (text-to-text remains the core).

1 year ago

Guess what? The jubilee 🎉 20th iteration of WMT General MT 🎉 is here, and we want you to participate - as the entry barrier to make an impact is so low!

This isn’t just any repeat. We’ve kept what worked, removed what was outdated, and introduced many exciting new twists! Among the key changes are:

1 year ago

Yeah, I haven't written a paper, since it's just a different prompt. It's published in the GitHub repository of GEMBA

1 year ago

That one is extremely large, but we didn't use it in the automatic ranking either. Unfortunately, I'm not aware of any API service for metrics

1 year ago

🙏 A huge thank you to all organizers, partners, and participants for making this year's WMT General MT Shared Task a success! Stay tuned for WMT25 - many exciting changes are coming! 🎉

1 year ago

🏆 Highlights from top systems:
✅ IOL-Research: led in constrained/open, winning 10/11 in its category.
✅ Unbabel-Tower70B: Best participant, winning 8/11 pairs.
✅ Claude-3.5-Sonnet: Best overall with 9/11 wins.
✅ Shoutout to Dubformer (speech) & CUNI-MH (strong constrained)

1 year ago