This market resolves YES if Open Philanthropy publishes a relatively open-ended call or application for individual AI safety researchers by the end of 2024. For the market to resolve positively, the application needs to allow researchers to propose their own research directions and must not be limited to academic positions.
I'm roughly thinking of applications like these, but allowing for longer research projects (e.g. a full year) and without a primary framing of 'community-building' or 'capacity-building':
https://www.openphilanthropy.org/career-development-and-transition-funding/
https://airtable.com/appivZBYDmjPU0SmE/shrOiSw9i5WJPnlWj
If it is the kind of program that can fund research like the following, the market will resolve YES: https://www.lesswrong.com/posts/5spBue2z2tw4JuDCx/steering-gpt-2-xl-by-adding-an-activation-vector
@firstuserhere Also interested in hearing your motivations/thoughts behind this question, beyond what the description says, @JonasVollmer. Does Open Phil not already have an early-career grant, or open grants for working in technical AI safety?
@firstuserhere Not to my knowledge. I created this question because I might set up something like this if Open Phil doesn't do it.