AMA: Markus Anderljung (PM at GovAI, FHI)

EDIT: I’m no longer actively checking this post for questions, but I’ll likely check in periodically.

Hello! I work as a project manager at the Centre for the Governance of AI (GovAI), part of the Future of Humanity Institute (FHI) at the University of Oxford, where I put time into e.g. recruitment, research management, policy engagement, and operations.

FHI and GovAI are hiring for a number of roles. Happy to answer questions about them:

  • GovAI is hiring a Project Manager to work alongside me. Deadline September 30th.

  • FHI is hiring researchers across three levels of seniority and all our research groups (including GovAI). Deadline October 19th.

  • The Future of Humanity Foundation, a new organisation aimed at supporting FHI, is hiring a CEO. Deadline September 28th.

  • We’re likely to open applications for our GovAI Fellowship in the next month or so: a 3-month research stint aimed at helping people get up to speed with AI governance research and test their fit for it. It will likely start in January or July 2021.

Relevant things folks at GovAI have published in 2020:

A little more about me:

  • At GovAI, I’ve been especially involved with e.g. our research on public and ML researcher views on AI governance and forecasting (led by Baobao Zhang), the implications of increased data efficiency (led by Aaron Tucker), the NeurIPS Broader Impact Statement Requirement (led by Carolyn Ashurst & Carina Prunkl), our submission on the EU Trustworthy AI Whitepaper (led by Stefan Torges), and what we can learn about AI governance from the governance of previous powerful technologies.

  • Before coming to GovAI in 2018, I worked as the Executive Director of EA Sweden, e.g. running a project promoting representation for future generations (more info here). I’ve also worked as a management consultant at EY Stockholm, and I ran the Giving What We Can: Cambridge group (now EA Cambridge) for a year.

  • I was encouraged to write down some of my potentially unusual views to spur discussion. Here are some of them:

    • There should be more EA community-building efforts focused on professionals, say, people with a few years of work experience.

    • I think EAs tend to underestimate the value of specialisation. For example, we need more people to become experts in a narrow domain or set of skills and then make that expertise relevant to the wider community. Most of the impact you have in a role comes once you’ve been in it for more than a year.

    • There is a vast array of important research that doesn’t get done because people don’t find it interesting enough.

    • People should stop using “operations” to mean “not-research”. I’m guilty of this myself, but the term lumps together many different skills and traits, probably leading people to undervalue them.

    • Work on EU AI policy is plausibly comparable in impact to work on US policy on the current margin, particularly over the next few years as the EU Commission’s White Paper on AI is translated into concrete policy.

    • I think the majority of AI risk is structural – as opposed to stemming from malicious use or accidents – e.g. technological unemployment leading to political instability, competitive pressures, or decreased value of labour undermining liberal values.

    • Some forms of expertise I’m excited to have more of at GovAI include: institutional design (e.g. how the next Facebook Oversight Board-esque institution should be set up), translating our research insights into policy proposals (e.g. what EU AI policy we should push for, or how a system to monitor compute could be set up), AI forecasting, and relevant bits of history and economics.