
If AGI has an okay outcome, will there be an AGI singleton?
25% chance
An okay outcome is defined in Eliezer Yudkowsky's market as:
An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.
This market resolves YES if, in an okay outcome, I can easily point to a single AGI (a singleton), and NO otherwise.
Related questions
Will we get AGI before 2030? (56% chance)
If Artificial General Intelligence has an okay outcome, what will be the reason?
Will we get AGI before 2032? (67% chance)
Will AGI be a problem before non-G AI? (20% chance)
By when will we have AGI?
Will we get AGI before 2026? (4% chance)
Will we get AGI before 2026? (7% chance)
Will AI create the first AGI? (41% chance)
Will we get AGI before 2048? (87% chance)
When artificial general intelligence (AGI) exists, what will be true?