
I'll resolve this entirely subjectively. (I'll stay out of betting in this market though.)
Resolves NA if AI safety doesn't become politically divided.
@causal_agency In the US (one year after you wrote that), X-risk AI safety seems to be disregarded by conservatives. Is it still a conservative issue in the UK?
It appears that both major US parties are focusing on the implications of AI as a substitute for labor. Biden did roll out some initiatives to study AI risk, but Trump is pulling back. For example, he rescinded Executive Order 14110 on day one: https://www.whitehouse.gov/presidential-actions/2025/01/initial-rescissions-of-harmful-executive-orders-and-actions/
@NathanNguyen So if the left is more pro-regulation than the right because of 99% fairness/diversity/inclusion reasons and 1% x-risk, how would you resolve? What if it’s 90-10, or 50-50?
@mariopasquato I’m not sure it makes sense to quantify things in that way. It’s more of a “I know it when I see it” kind of thing.
@NathanNguyen Any idea how to make this less arbitrary? Where does x-risk end and where do concerns about jobs and discrimination begin? Right now, if you read the famous open letter signed by Bengio, Wozniak, etc., would you conclude that the motive is x-risk or just negative economic and social impact?
@Gigacasting that is why they are the ones trying to outlaw medication, right? oh wait, no
@DerkDicerk I think autonomous weapons aren’t the kind of thing AI safety folks worry will end humanity.