Will Kamala Harris talk about AI safety between now and Election Day?
111 · Ṁ13k · Nov 5 · 20% chance

Merely mentioning “AI safety” doesn’t count. For the purposes of this market, she needs to discuss the concerns and regulatory proposals around AI safety as currently discussed among scholars of the topic.

Proposed resolution basis (updated 31 Jul 2024 per comments):

Kamala Harris mentions something directly related to the key concerns in AI Safety as described here:

Problems in AI safety can be grouped into three categories: robustness, assurance, and specification. Robustness guarantees that a system continues to operate within safe limits even in unfamiliar settings; assurance seeks to establish that it can be analyzed and understood easily by human operators; and specification is concerned with ensuring that its behavior aligns with the system designer’s intentions.


Key Concepts in AI Safety: An Overview


Tim G. J. Rudner and Helen Toner, "Key Concepts in AI Safety: An Overview" (Center for Security and Emerging Technology, March 2021). https://doi.org/10.51593/20190040.

Examples of what would cause this market to resolve YES:

Commenting that AI systems need to:

  • operate within safe limits even in unfamiliar settings;

  • be easily understood by human operators; or

  • align with the system designer’s intentions.

Example that would not count for market resolution:

Commenting, “We need to make sure that AI systems are safe,” without further elaboration.


Maybe at the DNC?


She has been the Biden administration's AI czar:

https://www.nytimes.com/2024/07/24/technology/kamala-harris-ai-regulation.html

I suspect she endorses the executive order and can speak intelligently about AI. Given that, it will be in her interest to do so.

(Flip side: she will have to defend the Biden administration's AI regulation, and will thus discuss AI safety.)

Do these discussions have to relate to existential risk, or is any kind of danger acceptable?

Any kind of danger outlined by the resolution source is acceptable; the specific context should be regulatory or legislative solutions/mitigations.

So, Harris has to mention robustness, assurance, specification, or something directly related to these concepts, correct?

Yes. If she uses different words that show a familiarity with the concepts, I’ll accept it.

I’m hoping that since she was Biden’s AI czar there is a reasonable chance.

I also welcome suggestions for how to improve the resolution criteria.

I would suggest changing "[mentioning] anything related to the key concerns in AI Safety" to "something directly related"—and I think it'd be helpful to provide examples of what would(n't) pass. For instance, I would expect her to make direct reference to the challenges to ensuring that AI systems:

  • operate within safe limits even in unfamiliar settings;

  • are understood easily by human operators; or

  • align with the system designer’s intentions.

On the other hand, a handwavy gesture like “we need to make sure that AI systems are safe,” without further substance, wouldn’t (as you mention, simply mentioning “AI safety” wouldn’t count).

Thanks, this looks like a great refinement. I’ll look at it closely and incorporate it carefully when I’m at my desk later today.