I make a contribution to AI safety that is endorsed by at least one high profile AI alignment researcher by the end of 2026
59% chance
For the purposes of this question, any AI alignment researcher who has written a sequence in either the Alignment Forum library or the LessWrong library counts as "high profile".
Related questions
Will Dan Hendrycks believe xAI has had a meaningful positive impact on AI alignment at the end of 2024?
43% chance
In 2025, will I believe that aligning automated AI research AI should be the focus of the alignment community?
53% chance
Will I (co)write an AI safety research paper by the end of 2024?
45% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
35% chance
Will someone commit terrorism against an AI lab by the end of 2025 for AI-safety related reasons?
23% chance
Will there be a coherent AI safety movement with leaders and an agenda in May 2029?
76% chance
By end of 2028, will there be a global AI organization, responsible for AI safety and regulations?
43% chance
Will there be serious AI safety drama at Meta AI before 2026?
59% chance