Open call: ā€œExistential risk of AI: technical conditionsā€

For anyone (including sceptics) who would like to spend 5 months ā€œestimating AI X-riskā€ and get handsomely paid for it.

Open tender by the Institute for Technology Assessment and Impact Analysis at KIT (Germany):
Open Call Link (all info is there)
Context to the open call

  • Project period: ~May 20 - October 20, 2025

Interesting for anyone who wants to ā€œcause-prioritise AIā€ (in a way, this tender aims to arrive at a p(doom)) and get paid for it (25–60/h+) šŸ’µ. Kind of like the AI Safety Summit report, but specifically for x-risk.

I’ve started writing a proposal; my thesis was on exactly this topic, so I am essentially expanding on it.

  • 🫵 I’m looking for someone to join me (or I’m happy to join someone else)
    (=> essentially reviewing the evidence for instrumental convergence, which is not a big pool of papers to look at)

šŸ“„ Just send me a message, even if you only want to learn more!

I’ll be active here (please have a low bar for reaching out, and do spread the word!)