By EOY 2025, will the model with the lowest perplexity on Common Crawl not be based on transformers?
28%
chance
If perplexity on Common Crawl is not available for models, I will use other benchmarks as a surrogate. This will inherently be a judgement process. If a model has not been announced by EOY 2025 and no benchmarks have been posted publicly, it will not be counted for the purpose of this market.
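For reference, perplexity is the exponential of the mean negative log-likelihood a model assigns to the tokens of a held-out corpus (lower is better). A minimal sketch with toy per-token probabilities (not any actual model's output):

```python
import numpy as np

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability per token."""
    token_probs = np.asarray(token_probs, dtype=float)
    return float(np.exp(-np.mean(np.log(token_probs))))

# A model that assigns probability 0.25 to every token has perplexity 4.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # → 4.0
```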
"Based on transformers" for the purpose of this question will be anything with multi-headed self-attention that feeds into an MLP.
@ConnorMcCormick oh yeah, that's definitely confusing people. Well, better for those of us who do understand it :)
@jacksonpolack The API only refreshes the data every 15 seconds, so if you're quick on the draw, it's totally doable.
Related questions
By the end of Q1 2025 will an open source model beat OpenAI’s o1 model?
44% chance
Will Transformer based architectures still be SOTA for language modelling by 2026?
69% chance
By EOY 2026, will it seem as if deep learning hit a wall by EOY 2025?
23% chance
On January 1, 2027, a Transformer-like model will continue to hold the state-of-the-art position in most benchmarks
59% chance
By the end of Q2 2025 will an open source model beat OpenAI’s o1 model?
72% chance
Will Adam optimizer no longer be the default optimizer for training the best open source models by the end of 2026?
50% chance
When will a non-Transformer model become the top open source LLM?
Will the most capable, public multimodal model at the end of 2027 in my judgement use a transformer-like architecture?
55% chance
Will transformers still be the dominant DL architecture in 2026?
58% chance
Will openAI have the most accurate LLM across most benchmarks by EOY 2024?
39% chance