
If Artificial General Intelligence has a poor outcome, what will be the reason?
85%: Someone finds a solution to alignment, but fails to communicate it before dangerous AI gains control.
78%: Something from Eliezer's list of lethalities occurs.
62%: Someone successfully aligns AI to cause a poor outcome.
5%: Alignment is impossible.
Inverse of https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=S3JhbnR6.
This market will not resolve. It exists primarily for users to explore particular lethalities. Please add responses.
"Poor" = human extinction or mass human suffering.
Related questions
If Artificial General Intelligence has an okay outcome, what will be the reason?
❓ If AGI turns out to be a disaster, what will be the main cause?
Why will "If Artificial General Intelligence has an okay outcome, what will be the reason?" resolve N/A?
If Artificial General Intelligence (AGI) has an okay outcome, which of these tags will make up the reason?
Will Eliezer's "If Artificial General Intelligence has an okay outcome, what will be the reason?" market resolve N/A? (29% chance)
If we survive general artificial intelligence, what will be the reason?
If we survive general artificial intelligence before 2100, what will be the reason?
When artificial general intelligence (AGI) exists, what will be true?