Which of these AI Safety Research Futarchy projects will get Conf Accepted, if chosen?
Goal Crystallisation: 85%
Post-training order and CoT Monitorability: 37%
Detection game: 34%
Exploring more metacognitive capabilities of LLMs: 33%
Salient features of self-models: 28%
Research sabotage dataset: 26%
Model Emulation: 26%
Model organisms resisting generalisation: 24%
Online Learning for Research Sabotage Mitigation: 23%

This is a derivative market of the markets linked to from this post.

For projects that are not chosen by the futarchy, the corresponding market here will resolve N/A. Otherwise, each market resolves YES if both the corresponding "uploaded to arXiv" and "accepted to a top ML conference" markets resolve YES; if either of those resolves NO or N/A, it resolves NO.
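As a rough sketch of the resolution rule above (the function and parameter names here are purely illustrative, not part of the market):

```python
from typing import Optional

def resolve(chosen: bool,
            arxiv: Optional[bool],
            conference: Optional[bool]) -> Optional[bool]:
    """Illustrative resolution logic for one project's derivative market.

    chosen     -- whether the futarchy selected the project
    arxiv      -- "uploaded to arXiv" market (True=YES, False=NO, None=N/A)
    conference -- "accepted to a top ML conference" market
    Returns True for YES, False for NO, None for N/A.
    """
    if not chosen:
        return None          # project not chosen -> this market resolves N/A
    if arxiv is True and conference is True:
        return True          # both underlying markets resolved YES
    return False             # either underlying market NO or N/A -> NO
```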

Be aware that resolving markets N/A isn't always easy; I will do my best to ask for mod assistance if there is trouble.
