Will Inner or Outer AI alignment be considered "mostly solved" first?
Inner 56%
Resolves as declared by the majority consensus across the Alignment Forum, Slate Star Codex, LessWrong, and MIRI, together with my own opinion.
Related questions
By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?
Will Meta AI start an AGI alignment team before 2026?
9% chance
Will the 1st AGI solve AI Alignment and build an ASI which is aligned with its goals?
17% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
52% chance
How difficult will Anthropic say the AI alignment problem is?
Will we solve AI alignment by 2026?
2% chance
Will I focus on the AI alignment problem for the rest of my life?
38% chance
Will xAI significantly rework their alignment plan by the start of 2026?
14% chance
AI honesty #2: by 2027 will we have a reasonable outer alignment procedure for training honest AI?
25% chance