Will the US implement information security requirements for frontier AI models by 2028?
88% chance
This market will resolve to yes if the US creates a policy by 2028 mandating information security protections (including cyber, physical, and personnel security) for frontier AI models. These security measures should be in place during model training to limit unintended proliferation of dangerous models. Frontier AI models are those with highly general capabilities (over a certain threshold) or trained with a certain compute budget (e.g. as much compute as $1 billion can buy today).
Luke Muehlhauser from Open Philanthropy suggests this idea in his April 2023 post, "12 tentative ideas for US AI policy." This market idea was proposed by Michael Chen.
Related questions
Will the US require a license to develop frontier AI models by 2028?
39% chance
Will the US implement software export controls for frontier AI models by 2028?
74% chance
Will a regulatory body modeled on the FDA regulate AI in the US by the end of 2027?
17% chance
Will the US implement testing and evaluation requirements for frontier AI models by 2028?
82% chance
Will the US fund defensive information security R&D for limiting unintended proliferation of dangerous AI models by 2028?
53% chance
Will a new lab create a top-performing AI frontier model before 2028?
45% chance
Will the US government enact legislation before 2026 that substantially slows US AI progress?
27% chance
Will the US restrict transfer of trained AI models before 2026? (Deny ≥100 countries)
24% chance
Will the US regulate AI development by end of 2025?
45% chance
Will the US implement AI incident reporting requirements by 2028?
83% chance