I mentioned a few months ago that I was planning to resign from the board of EV UK: I’ve now officially done so.
Since last November, I’ve been recused from the board on all matters associated with FTX and related topics, which has ended up being a large proportion of board business. (This is because the recusal affected not just decisions that were directly related to the collapse of FTX, but also many other decisions for which the way EV UK has been affected by the collapse of FTX was important context.) I know I initially said that I’d wait until the board had more capacity before stepping down, but trustee recruitment has moved more slowly than I’d anticipated, and with the ongoing recusal I didn’t expect to add much capacity for the foreseeable future, so it felt like a natural time to step down.
It’s been quite a ride over the last eleven years. Effective Ventures has grown to a size far beyond what I expected, and I’ve felt privileged to help it on that journey. I deeply respect the rest of the board, and the leadership teams at EV, and I’m glad they’re at the helm.
Some people have asked me what I’m currently working on, and what my plans are. My time this year has been spread across a number of different things, including fundraising, helping out other EA-adjacent public figures, supporting GPI, CEA and 80,000 Hours, writing additions to What We Owe The Future, and helping with the print textbook version of utilitarianism.net that’s coming out next year. It’s also personally been the toughest year of my life; my mental health has been at its worst in over a decade, and I’ve been trying to deal with that, too.
At the moment, I’m doing three main things:
- Some public engagement, in particular around the WWOTF paperback and foreign language book launches and at EAGxBerlin. This has been, and will continue to be, lower-key than the media around WWOTF last year, and more focused on in-person events; I’m also putting more emphasis on fundraising than I did before.
- Research into “trajectory changes”: in particular, ways of increasing the wellbeing of future generations other than ‘standard’ existential risk mitigation strategies, especially on issues that arise even if we solve AI alignment, like digital sentience and the long reflection. I’m also doing some learning to try to get to grips with how to update properly on the latest developments in AI, in particular with respect to the probability of an intelligence explosion in the next decade, and how hard we should expect AI alignment to be.
- Gathering information for what I should focus on next. In the medium term, I still plan to be a public proponent of EA-as-an-idea, partly because I think it plays to my comparative advantage, and partly because I’m worried about people neglecting “EA qua EA”. If anything, all the crises faced by EA and by the world in the last year have reminded me of just how deeply I believe in EA as a project, and how the message of taking a thoughtful, humble, and scientific approach to doing good is more important than ever. The precise options I’m considering are still quite wide-ranging, including: a podcast and/or YouTube show and/or substack; a book on effective giving; a book on evidence-based living; or deeper research into the ethics and governance questions that arise even if we solve AI alignment. I hope to decide on that by the end of the year.
Thank you so much for your work with EV over the last year, Howie! It was enormously helpful to have someone so well-trusted, with such excellent judgment, in this position. I’m sure you’ll have an enormous positive impact at Open Phil.
And welcome, Rob—I think it’s fantastic news that you’ve taken the role!