Edit: This essay is great, and I was so excited to build on what Peter wrote that I didn’t even finish reading the whole thing before I started commenting at length on each individual point. I believe I went over something Peter had already covered in the OP before I realized it. I’ll edit that out for brevity, but forgive me if I miss something and needlessly repeat Peter.
[Epistemic Status: heady, giddy and rapid hypothesis generation]
Meta Trap #2: Meta Orgs Risk Curling In On Themselves
The problem is that if we spread the idea that meta-orgs are the highest impact opportunity too well, we risk the creation of a meta-movement to spread the meta-movement and nothing else. Once meta-orgs get to the point where it’s all about EAs helping other EAs to help EAs, we’ve gotten to the point where there’s serious risk that actual impact won’t occur.
Consider that plan again where we get someone to full-time find chapter advisors for setting up lots of EA chapters. Now imagine that instead of advocating to the college students that they earn to give for GiveWell charities we suggest that this chapter building project is really the best possible thing to be doing, so they should get involved in, donate to, and volunteer for it. Now we’ve got chapters developing chapters to perpetuate developing more chapters. But what does this actually accomplish? We might as well have them working to set up Club Club.
I perceive two pitfalls here. First, logistics and administration may become more difficult as the meta-project grows. For however many levels n an organization goes meta, where n is the number of steps the project’s management is removed from the object-level goals of effective altruism, there are going to be more people, more information, and more organizations to keep up with. As a project gets more meta, it will become more difficult to convince effective altruists in general to increase its funding so it can scale, or become even more meta. So a project that from the start seeks to go more meta in an unrestrained way will face constant talent and financial constraints. As it goes more meta, it will be difficult to find fitting employees, and because of how abstract the project would become, it would become difficult to explain how the project would work in the first place, which would only deepen funders’ skepticism. If a project receives all its funding from one major donor, or a single consortium of donors, the project managers risk losing the independence to run their project as they see fit, held back by constant questions or by donors and directors steering the project in a new direction. This is why, e.g., GiveWell doesn’t want to receive all its funding from Good Ventures.
Running a meta-project in a nimble way that reacts quickly to changing circumstances and steep learning curves seems necessary, as meta-projects are almost always breaking new ground in the non-profit sector and/or effective altruism when they’re founded. So they can’t risk losing their independence by courting only one donor, who may go on to seek to steer the ship themselves. If the meta-project in question were confined to effective altruism, and it was otherwise facing financial constraints, I’d be skeptical of any claim that it could find sufficient funding by going outside the effective altruism community.
Second, there is a temptation for meta-projects to go to the nth meta-level indefinitely. If they do so, I figure they’d eventually reach the point where the network they’ve built for expanding effective altruism becomes unmanageable, and the members of that network can neither coordinate nor gain the self-awareness to know what to do with themselves. So the whole thing would unwind. While not all the value of the meta-project would be undone in such a case, I think there would be sufficient collapse or loss that the initial costs to start and the ongoing costs to maintain the project would be unjustified, and the resources would counterfactually have done more good at some lower level of organization. Whether that level is the object level, e.g., just donating to AMF, or only one meta-level up, merely fundraising for AMF, there would have been a point at which the managers should have known to stop the constant abstraction of the project.
I think the solution to both these problems is greater accountability and oversight. Major donors to EA meta-projects might want to see a laid-out plan of operational goals for the next year, a budgetary breakdown of the funding anticipated as necessary for those goals, and a detailed account of how they’ve done in the past, to demonstrate a track record of reliability. Charity Science does all that, right? This is a cross between the proposal to fix science by registering hypotheses before studies are conducted, and a company transparently providing information to assure its investors that the company’s executives are making the best choices they can. If so, I think every other EA meta-project or meta-charity should be expected to do the same. I’d be happy to help normalize this trend. The best way to do that would be to explain how I donated to, e.g., Charity Science instead of CEA or GiveWell based on Charity Science making abundantly clear its operational goals and what it expected to achieve relative to other organizations. I don’t have enough money to donate right now to justify that, and likely won’t in the next year, so I can’t do that. I encourage others, such as yourself, Peter, to do that more often. In the meantime, I’d lend my moral, vocal, or other support.
Also, I figure meta-projects or meta-charities should be incentivized, in addition to the above, to preregister their low-ball, average, and stretch goals for the year, with as quantified a degree of confidence as they can muster. Incentivizing this could be facilitated by an EA prediction market or the other mechanisms of moral economics that have recently been discussed on this forum. A prediction market could help calibrate a project’s expectations by having the best forecasters in the market make their own predictions of the meta-project’s likely success. If all the best forecasters, with their proven track records, independently converged on the conclusion that the scope of the project was overconfident, the project managers would be induced to temper their overconfidence and, e.g., ask for less funding than they had claimed they could optimally use. To get an organization to change its behavior in the face of such a prediction-market scenario, I figure it might need to be incentivized with rewards for updating in the right direction. I can’t think of any right now besides assurance that it would receive the appropriate level of funding (corresponding to its most realistic goals).
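To make “as quantified a degree of confidence as they can muster” concrete, here is a minimal sketch of how preregistered goal predictions could be scored for calibration after the year ends, using the standard Brier score. All goals, probabilities, and outcomes below are hypothetical, purely for illustration:

```python
# Score a meta-charity's preregistered goals once outcomes are known.
# Each entry pairs a stated probability of success with the actual
# outcome (1 = achieved, 0 = missed). All numbers are made up.

def brier_score(predictions):
    """Mean squared error between stated confidence and the 0/1 outcome.
    Lower is better; always guessing 50% would score 0.25."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

goals = [
    (0.90, 1),  # low-ball goal, e.g. raise a modest sum -- achieved
    (0.60, 1),  # average goal, e.g. found several chapters -- achieved
    (0.20, 0),  # stretch goal, e.g. hire full-time staff -- missed
]

print(round(brier_score(goals), 2))  # -> 0.07
```

A donor or forecaster could compare this score year over year, or against other organizations’ scores, to reward the projects whose stated confidence tracks reality.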
Finally, it seems to me that how internally well-connected the existing effective altruism network is matters just as much for facilitating an increase in valuable object-level work as growing the movement does. I call the former, increasing the value of internal networking, growing stronger, and the latter, growing the network as a whole, growing bigger. This distinguishes the different ways effective altruists use the phrase “movement growth”. The distinction was first made clear to me at the 2013 EA Summit: “growing stronger” seemed the approach to movement development favored by Anna Salamon and CFAR, “growing bigger” the approach favored by CEA, and a combination of both the strategy seemingly favored by Geoff Anders and Leverage Research. I think managing and improving the internal strength of the community as it is, and how we connect and collaborate, is just as or more important than increasing the absolute size of effective altruism. Another way of thinking of this is: increasing absolute impact vs. increasing impact per unit of effort expended. My recent spate of proposals to and engagement with .impact has been motivated by facilitating movement development via increasing the utility of the current network.
Also, I figure meta-projects or meta-charities should be incentivized, in addition to the above, to preregister what their low-ball, average, and stretch goals are for the year, with as quantified a caliber of confidence as they can muster.
I agree. Meta-projects are inherently difficult to evaluate, but I do think we’re not spending nearly as much time or money on such things as we could.
Also, thanks for all your feedback, Evan. I’m glad you liked it.