Will an AI system similar to Auto-GPT make a successful attempt to kill a human by 2030?
28% chance · closes 2031 · 34 traders · Ṁ1674

By default resolves NO; the burden of proof is on the YES side.


This should be so illegal 💀

What counts as "similar to"? Does it need to be a prompting scaffold around a GPT-based language model?

@tailcalled I think the main point is that the AI should arrive at this decision completely autonomously. Even if humans gave the AI a specific task, it should not be one where killing a human is an expected outcome; e.g., a military AI would not count, but a paperclip-producing AI would.