leopold
OpenAI’s new Preparedness team is hiring
Want to win the AGI race? Solve alignment.
Nobody’s on the ball on AGI alignment
The FTX Future Fund team has resigned
“Develop Anthropomorphic AGI to Save Humanity from Itself” (Future Fund AI Worldview Prize submission)
“AI predictions” (Future Fund AI Worldview Prize submission)
“AGI timelines: ignore the social factor at their peril” (Future Fund AI Worldview Prize submission)
Ah sweet, thank you! Didn’t know this existed, glad to see it and just used it :)
Announcing the Future Fund’s AI Worldview Prize
Would it be possible for the EA forum to add footnote functionality? Thanks!
leopold’s Quick takes
That’s really cool to hear! Excited about your work!
This is a blog post, and we meant a month from when we published it. Sorry for the confusion!
Thanks for your comment, Odin! At this point, we’ve finished considering regranting expressions of interest and have invited the regrantors for the initial test.
Future Fund June 2022 Update
We’re planning to invite additional regrantors by the end of this month or so. We are evaluating expressions of interest and referrals for regrantors on a rolling basis, so please send these in as soon as possible.
You are welcome to apply now!
Regrantors are able to make grants to people they know (in fact, having a diverse network is part of what makes for an effective regrantor); they just have to disclose if there’s a conflict of interest, and we may reject a grant if we don’t feel comfortable with it on those grounds. We don’t currently have a network for regrantors that is open for external people to join.
Thanks! We are not planning to publish the list of regrantors for now.
Yep—and in particular, they are looking to hire people who do well on their Preparedness challenge: https://openai.com/form/preparedness-challenge. So if you’re interested, try that out!