If AI safety is divided by left/right politics in the next 5 years, will the left be more pro-regulation than the right?
96 · Ṁ11k · 2028 · 69% chance

I'll resolve this entirely subjectively. (I'll stay out of betting in this market though.)

Resolves NA if AI safety doesn't become politically divided.


Is this asking about a particular country or globally? AI safety in the UK might be more associated with the Conservative Party at present.

Globally, though I’m not super familiar with how politics works in countries outside the US. If it seems like AI splits in different directions for US vs non-US, I’ll probably make a poll to ask people on here what side they think AI regulation is more associated with

I think it would be good to limit to one country, and create different markets for others

@causal_agency In the US, (one year after you wrote that), X-risk AI safety seems to be disregarded by the conservatives. Is it still a conservative issue in the UK?

It appears that both major US parties are focusing on the implications of AI as a substitute for labor. Biden did roll out some initiatives to study AI risk, but Trump is pulling back. (For example, he rescinded Executive Order 14110 on day 1. (https://www.whitehouse.gov/presidential-actions/2025/01/initial-rescissions-of-harmful-executive-orders-and-actions/))

Do you include Fairness, Accountability, and Transparency as part of AI safety, or is this specifically about existential risk?

@ahalekelly Just x-risk

@NathanNguyen So if the left is more pro-regulation than the right because of 99% fairness diversity inclusion reasons and 1% x-risk how would you resolve? What if it’s 90-10 or 50-50?

@mariopasquato I’m not sure it makes sense to quantify things in that way. It’s more of a “I know it when I see it” kind of thing.

@NathanNguyen Any idea on how to make this less arbitrary? Where does x-risk end and concerns about jobs and discrimination begin? Right now if you read the famous open letter signed by Bengio, Wozniak etc… would you conclude that the motive is x-risk or just negative economic and social impact?

predicts YES

Always

predicts YES

@Gigacasting That is why they are the ones trying to outlaw medication, right? Oh wait, no

The definition of AI has been stretched to be meaningless. Is the TikTok ban AI? Every service pretends to be AI.

@DerkDicerk I mean AI of the sort that people fear will end humanity

@NathanNguyen those are autonomous weapons regulations and are nonpartisan

@DerkDicerk I think autonomous weapons aren’t the kind of thing AI safety folks are concerned about ending humanity
