Will there be a more sample-efficient pretraining algorithm than next token prediction for NLP before 2027?
Ṁ593 · 2027 · 42% chance

Will a pretraining algorithm for language models which meaningfully improves on the sample efficiency of next token prediction be widely known before 2027?

Some details:

  • The technique must involve self-supervised learning on unlabeled data

  • The technique must have documented scaling properties which meaningfully outperform next token prediction in test perplexity with respect to data, for whichever model architectures are popular by 2027

    • It's fine if there are tradeoffs with compute efficiency

    • It's fine if next token prediction outperforms the new technique early in training, or for small training runs, as long as scaling trends predict that the new technique would be better on runs using at least 10^26 FLOP and 15T tokens (roughly the budget of Llama 3 400B)

  • It must be accepted within the ML community that the technique is broadly superior to next token prediction (even if there are some tradeoffs) and has the potential to scale to outperform the best prior models trained using next token prediction

  • To validate the scaling potential of the method, it must be used to train a model which qualitatively matches or exceeds GPT-4 (if the above conditions hold before 2027, I will wait until July 2027 for such a model and will resolve YES if one is produced)
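For concreteness, the baseline the question compares against, next token prediction evaluated by test perplexity, can be sketched with a toy model. Everything below (the corpus, the bigram counts, the smoothing) is illustrative only, not the actual neural pretraining setup:

```python
import math
from collections import Counter, defaultdict

# Toy next-token-prediction model: estimate P(next | current) from
# bigram counts on a tiny "training corpus", then score held-out text.
train = "the cat sat on the mat the cat ran".split()
test = "the cat sat".split()

vocab = set(train) | set(test)
bigrams = defaultdict(Counter)
for cur, nxt in zip(train, train[1:]):
    bigrams[cur][nxt] += 1

def prob(cur: str, nxt: str) -> float:
    """Conditional next-token probability with add-one smoothing."""
    counts = bigrams[cur]
    return (counts[nxt] + 1) / (sum(counts.values()) + len(vocab))

# The training objective is the average negative log-likelihood of the
# next token; test perplexity is exp(loss). Lower is better.
nll = [-math.log(prob(c, n)) for c, n in zip(test, test[1:])]
loss = sum(nll) / len(nll)
print(f"test perplexity: {math.exp(loss):.2f}")
```

A candidate technique would count under the criteria above if its documented scaling curves reach lower test perplexity than this objective for the same amount of unlabeled data, at the stated budgets.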


Define token. What about latent space tokens?

@JohnCarpenter Tokens are discretizations of data, generally text but possibly other modalities. In autoregressive language modeling, the goal is to produce probability distributions over token sequences. Currently, most tokenizers produce about 0.75 words per token (roughly 1.3 tokens per word), but this might change. I’m not referring to latent space tokens. When I mention “15T tokens” in the resolution criteria, I’ll treat that as an equivalent amount of unlabeled training data, even if tokenization methods change.
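The words-to-tokens conversion implied here can be made explicit with a back-of-envelope sketch. The 0.75 ratio is an assumed average for typical English tokenizers, not a fixed constant, and the helper name is hypothetical:

```python
# Assumed average for typical English tokenizers; the true ratio
# varies with the tokenizer and the text.
WORDS_PER_TOKEN = 0.75

def words_to_tokens(n_words: int) -> int:
    """Rough estimate of how many tokens a corpus of n_words occupies."""
    return round(n_words / WORDS_PER_TOKEN)

# Under this assumption, a 15T-token budget corresponds to roughly
# 11.25T words of raw text.
print(words_to_tokens(11_250_000_000_000))  # → 15000000000000
```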

What about latent diffusion language models?

@RemNi My understanding of the method is that an autoencoder is pretrained and diffusion is used to model the latent space. I’m not familiar with how the technique can be used for autoregressive language modeling, but if there’s a way that works and its scaling is borne out in the manner required by the question, then it would count.