One worry I have is that the longtermist community (especially the funders) is actively repelling the driver types: people who want to dive in and start doing (Phase 2 type) things.
This is my experience. I have been pushing forward Phase 2 type work (here) but have been told various things: not to scale up, that Phase 1.5 work is not helpful, that we need more research first to know what we are doing, that any interaction with the real world is too risky. Such responses have helped push me away. And I know I am not the only one (e.g. the staff of the Longtermist Entrepreneurship Project seemed to worry about this feature of longtermist culture too).
I am not quite sure how to fix this. Maybe FTX will help. Maybe we should tell entrepreneurial/policy folk not to apply to the LTFF or other Phase 2-sceptical funders. Maybe we just need more discussion of the topic.
PS. Owen, I am glad you are part of this community and thinking about these things. I thought this post was amazing, so thank you for it. And great reply, John.
I agree with you, and with John and the OP. I have had exactly the same experience of the longtermist community pushing away Phase 2 work, particularly in AI Alignment. If it's not purely technical or theoretical lab work, the funding bodies have zero interest in funding it, and the community has barely more interest in even discussing it. This creates a feedback loop that keeps narrowing the community's focus.
For example, there is a potentially very high-impact opportunity in the legal sector right now to make a positive difference in AI Alignment. There is currently a string of ongoing court cases over AI transparency in the UK, particularly relating to government use of AI. These could result either in the law saying that AI must be transparent to academia and the public for audit (if the human rights side wins), or in the law saying that AI can be kept totally secret even when its use affects the public without their knowledge (if the government wins). No prizes for guessing which outcome would be better for s-risk reduction and for alignment research on misalignment as it evolves.
That's obviously a big oversimplification, boiled down for forum use, but every AI Alignment person I speak to is absolutely horrified at the idea of getting involved in actual, adversarial AI policy work. Saying "Hey, maybe EA should fund some AI and law experts to advise the transparency lobby/lawyers on these cases for free" or "Maybe we should start informing the wider public about AI Alignment risks so we can get alignment on political agendas" at an AI Alignment workshop gets a reaction similar to suggesting we all go skydiving without parachutes and see who reaches the ground first.
This lack of appetite for Phase 2 work, or for non-academic direct impact, harms us all in the long run. Most of the issues in AI alignment, climate policy, or nuclear policy require public and political will to be solved. By sticking to theoretical Phase 1 work that is out of reach of, or of no interest to, most of the public, we squander the opportunity to present our ideas to the public at large and generate support: support we need to make many positive changes a reality.
It's not that Phase 1 work isn't useful; it's critical. It's just that Phase 2 work is what makes Phase 1 work a reality instead of just a thought experiment. Just look at any AI governance or AI policy group right now. There are a few good ones, but most AI policy output is research papers or thought experiments, because that is the metric by which these groups judge their own impact. If you ask, "The research is great, but what have you actually changed?", a lot of them flounder. They all state that they want to change AI policy, but they simultaneously have no concrete plan to do it and refuse all help to try.
In longtermism, unfortunately, the emphasis tends to be much more on theory than on action, which is understandable. In some cases this is a very good thing, because we don't want to rush in with rash actions and make things worse. But if we never take any action, then what was the point of it all? All we did was sit around all day blowing other people's money.
Maybe the Phase 2 work won't work. Maybe that court case I mentioned will go wrong despite our best efforts, or result in unintended consequences, or whatever. But without any Phase 2 work, we will never know. The only way to make action effective is to try it, get better at it, and learn from it.
Because guess what? The people who want AI misalignment, who don't care about climate change, who profit from pandemics, or who want nuclear weapons: they have zero hesitation about Phase 2 at all.