Mentioning “AI safety” doesn’t count. For the purposes of this market, she needs to discuss concerns and proposals for regulating AI safety as currently being discussed among scholars of this topic.
Proposed resolution basis (updated 31 Jul 2024 per comments):
Kamala Harris mentions something directly related to the key concerns in AI Safety as described here:
Problems in AI safety can be grouped into three categories: robustness, assurance, and specification. Robustness guarantees that a system continues to operate within safe limits even in unfamiliar settings; assurance seeks to establish that it can be analyzed and understood easily by human operators; and specification is concerned with ensuring that its behavior aligns with the system designer’s intentions.
Key Concepts in AI Safety: An Overview
Tim G. J. Rudner and Helen Toner, "Key Concepts in AI Safety: An Overview" (Center for Security and Emerging Technology, March 2021). https://doi.org/10.51593/20190040.
Examples of what would cause this market to resolve YES:
Commenting that AI systems need to:
operate within safe limits even in unfamiliar settings;
be easily understood by human operators; or
align with the system designer's intentions.
Example that would not count for market resolution:
Commenting, "We need to make sure that AI systems are safe," without further elaboration.
I think this resolves No now. At least the most recent thing I can find on Google is her 2023 remarks.
She has been the Biden administration's AI czar:
https://www.nytimes.com/2024/07/24/technology/kamala-harris-ai-regulation.html
I suspect she endorses the executive order and can speak intelligently about AI. Given that, it will be in her interest to do so.
(Flip side: she will have to defend the Biden administration's AI regulation, and will thus discuss AI safety.)
I would suggest changing "[mentioning] anything related to the key concerns in AI Safety" to "something directly related"—and I think it'd be helpful to provide examples of what would(n't) pass. For instance, I would expect her to make direct reference to the challenges to ensuring that AI systems:
operate within safe limits even in unfamiliar settings;
are understood easily by human operators; or
align with the system designer’s intentions.
On the other hand, a handwavy gesture like "we need to make sure that AI systems are safe," without further substance, wouldn't pass (as you mention, simply mentioning "AI safety" wouldn't count).