Thanks for writing this. I had had similar thoughts. I have some scattered observations:
I find those bullshitty personality tests one sometimes does on work retreats quite instructive on this. On the analytical/driver/amiable/expressive grid, EAs probably cluster hard in the analytical section, but really great executors like Bezos, Zuckerberg, Musk and Cummings are strongly in the driver section. I have heard that successful businesses are often led by combo teams of drivers and analytical people who can moderate the excesses of each type. The driver personality type can seem anathema to EA/analytic thinking because it can be quite personality-led, a bit like following a guru: their plans can be obscure, can sometimes seem unrealistic, and they're often not super analytical and careful, which is part of why they take these really risky bets. It would have been difficult to believe in advance that Musk would create the world's most valuable car company, and difficult for him to explain to you how he was going to do it. Nevertheless, he did it, and it would have been sensible to follow him. I think we need to recognise the cultural barriers to execution in EA.
This suggests that we need to work really hard to cultivate and support the executors in the community. Alternatively, we could pull in executors from outside EA and point them at valuable projects.
Now is a good time for people to test out being executors to see whether they are a good fit.
I think this counts in favour of working on valuable projects even if they are probably not what's best from a longtermist point of view. E.g. from what I have seen, I don't think EAs did much to reduce the damage from COVID; others, like Cowen's Fast Grants, had much more impact. It is true that much bigger bio-disasters comprise a bigger fraction of the (large) risk this century. But (a) at the very least, getting some practice in doing something useful in a crisis seems like a good idea for career capital-type reasons, (b) it builds credibility in the relevant fields, and (c) we can test who is good at acting in a crisis and back them in the future.
One worry I have is the possibility that the longtermist community (especially the funders) is actively repelling and pushing away the driver types – people who want to dive in and start doing (Phase 2 type) things.
This is my experience. I have been pushing forward Phase 2 type work (here) but have been told various things: not to scale up, that Phase 1.5 work is not helpful, that we need more research first to know what we are doing, that any interaction with the real world is too risky. Such responses have helped push me away. And I know I am not the only one (e.g. the staff at the Longtermist Entrepreneurship Project seemed to worry about this feature of longtermist culture too).
Not quite sure how to fix this. Maybe FTX will help. Maybe we should tell entrepreneurial/policy folk not to apply to the LTFF or other Phase 2-sceptical funders. Maybe just more discussion of the topic.
PS. Owen, I am glad you are part of this community and thinking about these things. I thought this post was amazing, so thank you for it. And great reply, John.
I agree with you, and with John and the OP. I have had exactly the same experience of the longtermist community pushing away Phase 2 work as you have, particularly in AI Alignment. If it's not purely technical or theoretical lab work, the funding bodies have zero interest in funding it, and the community has barely more interest than that in discussing it. This creates a feedback loop that keeps the focus narrow.
For example, there is a potentially very high-impact opportunity in the legal sector right now to make a positive difference in AI Alignment. There is currently a string of ongoing court cases over AI transparency in the UK, particularly relating to government use, which could either result in the law saying that AI must be transparent to academia and the public for audit (if the human rights side wins) OR the law saying that AI can be totally secret even when its use affects the public without them knowing (if the government wins). No prizes for guessing which outcome would be better for s-risk and for AI Alignment research on misalignment as it evolves.
That's a big oversimplification obviously, boiled down for forum use, but every AI Alignment person I speak to is absolutely horrified at the idea of getting involved in actual, adversarial AI Policy work. Saying "Hey, maybe EA should fund some AI and law experts to advise the transparency lobby/lawyers on these cases for free" or "maybe we should start informing the wider public about AI Alignment risks so we can get AI Alignment on political agendas" at an AI Alignment workshop gets a similar reaction to suggesting we all go skydiving without parachutes and see who reaches the ground first.
This lack of appetite for Phase 2 work, or for non-academic direct impact, harms us all in the long run. Most of the issues in AI Alignment, for example, or in climate policy or nuclear policy, require public and political will to be resolved. By sticking to theoretical and Phase 1 work that is out of reach of, or of no interest to, most of the public, we squander the opportunity to show our ideas to the public at large and generate support: support we need to make many positive changes a reality.
It's not that Phase 1 work isn't useful; it's critical. It's just that Phase 2 work is what makes Phase 1 work a reality instead of just a thought experiment. Look at any AI Governance or AI Policy group right now. There are a few good ones, but most AI Policy work consists of research papers or thought experiments, because that is the metric these groups judge their own impact by. If you ask "The research is great, but what have you actually changed?", a lot of them flounder. They all state that they want to make changes in AI Policy, but they simultaneously have no concrete plan to do it and refuse all help to try.
In Longtermism, unfortunately, the emphasis tends to be much more on theory than action. In some cases that makes sense, and is even a very good thing: we don't want to rush in with rash actions and make things worse. But if we never take any action, what was the point of it all? All we did was sit around all day blowing other people's money.
Maybe the Phase 2 work won't work. Maybe that court case I mentioned will go wrong despite best efforts, or result in unintended consequences, or whatever. But the thing is, without any Phase 2 work we won't know. The only way to make action effective is to try it, get better at it, and learn from it.
Because guess what? The people who want AI misalignment, who don't care about climate change, who profit from pandemics, or who want nuclear weapons have no hesitation about Phase 2 at all.
I agree with quite a bit of this. I particularly want to highlight the point about combo teams of drivers and analytical people: I think EA doesn't just want more executors, but more executor/analyst teams that work really well together. Because of the lack of feedback loops on whether work is really helpful for long-term outcomes, we'll often need excellent analysts embedded at the heart of execute-y teams. So as well as cultivating executors, we want to cultivate analyst types who can work well with executors.