Can you clarify your models of which kinds of projects could cause net harm? My impression is that there is some thinking that funding many things would be actively harmful, but I don’t feel like I have a great picture of the details here.
If there are such models, are there possible structural solutions to identifying particularly scalable endeavors? I’d hope that we could eventually identify opportunities for long-term impact that aren’t “find a small set of particularly highly talented researchers”, but things more like, “spend X dollars advertising Y in a way that could scale” or “build a sizeable organization of people that don’t all need to be top-tier researchers”.
Some things I think could actively cause harm:
- Projects that accelerate the technological development of risky technologies without a correspondingly greater speedup of safety technologies
- Projects that result in a team covering a space or taking on some coordination role that is worse than the next person who could have come along
- Projects that engage with policymakers in an uncareful way, making them less willing to engage with longtermism in the future, or causing them to make bad decisions that are hard to reverse
- Movement-building projects that give a bad first impression of longtermists
- Projects that risk attracting a lot of controversy or bad press
- Projects with ‘poisoning the well’ effects, where if a project is executed poorly the first time, someone trying it again will have a harder time—e.g., if a large-scale project doing EA outreach to high schoolers went poorly, I think a subsequent project would have a much harder time getting buy-in from parents.
More broadly, as Adam notes above, I think the movement grows as a function of its initial composition. Even if the LTFF had infinite money, this would push against funding every project where we expect the EV of the object-level work to be positive—if we want the community to attract people who do high-quality work, we should fund primarily high-quality work. Since the LTFF does not have infinite money, I don’t think this has much of an effect on my funding decisions, but I’d have to think about it more explicitly if we end up with much more money than our current funding bar requires. (There are also other obvious reasons not to fund all positive-EV things, e.g. if we expected to be able to use the money better in the future.)
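To make that last parenthetical a bit more concrete, here is a rough sketch of the funding-bar idea in my own framing (not an official LTFF decision rule): a positive-EV grant can still be worth passing on if we expect the money to do more good later.

```latex
% Rough sketch (my own framing, not an official LTFF rule): fund a grant now
% only if its expected value per dollar clears the bar set by the best
% marginal use we expect to have for the money in the future.
\[
  \frac{\mathrm{EV}(\text{grant})}{\mathrm{cost}(\text{grant})}
  \;\ge\;
  \mathbb{E}\left[\text{EV per dollar of the best marginal future use}\right]
\]
```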
I think it would be good to have scalable interventions for impact. A few thoughts on this:
At the org level, there’s a bottleneck in mentorship and organizational capacity, and loosening it would allow us to take on more inexperienced people. I don’t know of a good way to fix this other than funding really good people to create orgs and become mentors. I think existing orgs are very aware of this bottleneck and working on it, so I’m optimistic that this will get much better over time.
Personally, I’m interested in experimenting with trying to execute specific high-value projects by actively advertising them and not providing significant mentorship (provided there aren’t negative externalities to the project not being executed well). I’m currently discussing this with the fund.
Overall, I think we will always be somewhat bottlenecked on really competent people who want to work on longtermist projects, and I would be excited for people to think of scalable interventions for this in particular. I don’t have any great ideas here off the top of my head.
I agree with the above response, but I would like to add some caveats because I think potential grant applicants may draw the wrong conclusions otherwise:
If you are the kind of person who thinks carefully about these risks, is likely to change course in response to critical feedback, and proactively syncs up with the main people/orgs in your space to ensure you’re not making things worse, I want to encourage you to try risky projects nonetheless, including projects that have a risk of making things worse. Many EAs have made mistakes that caused harm, including myself (I mentioned one of them here), and while it would have been good to avoid them, learning from those mistakes also helped us improve our work.
My perception is that “taking carefully calculated risks” won’t lead to your grant application being rejected (perhaps it would even improve your chances of being funded because it’s hard to find people who can do that well) – but “taking risks without taking good measures to prevent/mitigate them” will.
Thanks so much for this, that was informative. A few quick thoughts:
“Projects that result in a team covering a space or taking on some coordination role that is worse than the next person who could have come along”
I’ve heard this one before, and I can sympathize with it, but it strikes me as a red flag that something is going a bit wrong. (I’m not saying that this is your fault, but I am flagging it as an issue for the community more broadly.) Big companies often don’t have the ideal teams for new initiatives. Often urgency is very important, so they put something together relatively quickly. If it doesn’t work well, it’s not that big of a deal; they disband the team and have them go to other projects, and perhaps find better people to take their place.
In comparison, with nonprofits it’s much more difficult. My read is that we sort of expect nonprofits to never die, which means we need to be *very very* sure about them before setting them up. But if this is the case, it’s obviously severely limiting. The obvious solution to this would be to have bigger orgs with more flexibility. Perhaps if specific initiatives were going well and demanded independence, that could happen later on, but hopefully not for the first few years.
“I think it would be good to have scalable interventions for impact.” In terms of money, I’ve been thinking about this too. If this were a crucial strategy, it seems like the kind of thing that could get a lot more attention. For instance, new orgs that focus heavily on ways to decently absorb a lot of money in the future.
Some ideas I’ve had:
- Experiment with advertising campaigns that could be clearly scaled up. Some of them seem linearly useful up to millions of dollars.
- Add additional resources to make existing researchers more effective.
- Buy the rights to books and spend on marketing for the key ones.
- Pay for virtual assistants and all other things that could speed researchers up.
- Add additional resources to make nonprofits more effective, easily.
- Better budgets for external contractors.
- Focus heavily on funding non-EA projects that are still really beneficial. This could mean an emphasis on funding new nonprofits that do nothing but rank and do strategy for more funding.
While it might be a strange example, the wealthy, and the Saudi government in particular, are examples of how to spend lots of money semi-successfully with relatively few trusted people.
Having come from the tech sector, in particular, I feel like there are often much stingier expectations placed on EA researchers.
In comparison, with nonprofits it’s much more difficult. My read is that we sort of expect nonprofits to never die, which means we need to be *very very* sure about them before setting them up. But if this is the case, it’s obviously severely limiting.
To clarify, I don’t think that most projects will be actively harmful—in particular, the “projects that result in a team covering a space that is worse than the next person who could have come along” case seems fairly rare to me, and would mostly apply to people who’d want to do certain movement-facing work or engage with policymakers. From a purely hits-based perspective, I think there’s still a dearth of projects that have a non-trivial chance of being successful, and this is much more limiting than projects being not as good as the next project to come along.
The obvious solution to this would be to have bigger orgs with more flexibility.
I agree with this. Maybe another thing that could help would be to have safety nets such that EAs who overall do good work could start and wind down projects without being worried about sustaining their livelihood or the livelihood of their employees? Though this could also create some pretty bad incentives.
Some ideas I’ve had:
Thanks for these. I haven’t thought about this much in depth, but I think these are overall very good ideas that I would be excited to fund. In particular:
- Experiment with advertising campaigns that could be clearly scaled up. Some of them seem linearly useful up to millions of dollars.
I agree with this; I think there’s a big opportunity to do better and more targeted marketing in a way that could scale. I’ve discussed this with people and would be interested in funding someone who wanted to do this thoughtfully.
- Add additional resources to make existing researchers more effective.
- Pay for virtual assistants and all other things that could speed researchers up.
- Add additional resources to make nonprofits more effective, easily.
Also super agree with this. I think an unfortunate component here is that many altruistic people are irrationally frugal, including me—I personally feel somewhat weird about asking for money to have a marginally more ergonomic desk set-up or an assistant, but I generally endorse people doing this and would be happy to fund them (or other projects making researchers more effective).
- Focus heavily on funding non-EA projects that are still really beneficial. This could mean an emphasis on funding new nonprofits that do nothing but rank and do strategy for more funding.
I think historically, people have found it pretty hard to outsource things like this to non-EAs, though I agree with this in theory.
---
One total guess at an overarching theme for why we haven’t done some of these things already is that people implicitly model longtermist movement growth on the growth of academic fields, which grow via slowly accruing prestige and tractable work to do over time, rather than modeling the movement as a tech company the way you describe. I think there could be good reasons for this—in particular, putting ourselves in the reference class of an academic field might attract the kind of people who want to be academics, who are generally the kinds of people we want—people who are very smart and highly motivated by the work itself rather than other perks of the job. For what it’s worth, though, my guess is that the academic model is suboptimal, and we should indeed move to a more tech-company-like model on many dimensions.
Again, I agree with Asya. A minor side remark:
- Pay for virtual assistants and all other things that could speed researchers up.
As someone who has experience with hiring all kinds of virtual and personal assistants for myself and others, I think the problem here is not the money, but finding assistants who will actually do a good job, and organizing the entire thing in a way that’s convenient for the researchers/professionals who need support. More than half of the assistants I’ve worked with cost me more time than they saved me. Others were really good and saved me a lot of time, but it’s not straightforward to find them. If someone came up with a good proposal for this, I’d want to fund them and help them.
Similar points apply to some of the other ideas. We can’t just spend money on these things; we need to receive corresponding applications (which generally hasn’t happened) or proactively work to bring such projects into existence (which is a lot of work).
There will likely be a more elaborate reply, but these two links could be useful.
Thanks!