I agree with some of the points of this post, but I think there is a dynamic missing here that is genuinely important.
Many people in EA have pursued resource-sharing strategies where they pick up some piece of the problems they want to solve, and trust the rest of the community to handle the other parts of the problem. One very common division of labor here is:
“I will go and do object-level work on our core priorities, and you will go and make money, or fundraise from other people, in order to fund that work.”
I think a lot of this type of trade has happened historically in EA. I have definitely forsaken a career with much greater earning potential than I have right now in order to contribute to EA infrastructure and to work on object-level problems.
I think it is quite important to recognize that insofar as a trade like this has happened, it gives the people who have done object-level work a substantial amount of ownership over the funds that other people have earned, as well as the funds that other people have fundraised (I also think this applies to Open Phil, though the case there is a bunch messier and I won’t go into my models of the game theory in detail). The person who owns the funds can defect on the arrangement at basically any time and just do direct work themselves, while the person who has been doing the object-level work so far has no ability to defect in the same way, so this trade relies on the person doing object-level work trusting the person who made money to keep their promise and act in both parties’ best interests.
My current guess is that the majority of EA’s impact is downstream of trades like this, so taking this into account is a pretty huge deal in my books. For example, being able to specialize in building infrastructure for the community, while trusting that I would maintain some ability to direct the EA portfolio, was a huge multiplier on my impact in the world.
That means I do think that a lot of the funds that have been raised within EA, though definitely not all of them, are meaningfully owned by the people who have forsaken direct control over that money in order to pursue our object-level priorities, and not by the people in whose bank accounts the money is technically located.
To make the case clearer: I think there are many people who have forsaken a path in industry where they could have been quite successful entrepreneurs making many millions of dollars, and who do not currently have direct control over millions of dollars.
Overall I think the current balance of funds not reflecting this is a mistake. In a world where this trade is working well, people who we think are responsible for a lot of the biggest positive impact would have been given hundreds of millions of dollars in exchange for that impact, and the ownership over the funds would be clearer. I am somewhat optimistic that more of this will happen in the future if things like impact certificates take off, since they try to make this whole situation less fuzzy and more concrete.
Historically EA had a culture of not really allowing the people running successful EA organizations to get rich from running them (partially for valid signaling- and grifter-related reasons), but this does mean that the current balance of funds does not reflect the fair and tacitly agreed-on allocation of funds, and this is a pretty precarious situation.
However, this does not mean that I think the money should be straightforwardly democratically allocated. I think the balance of funds and other sources of power in EA should roughly represent the balance of past positive impact that people have achieved (which includes the positive impact from earning and fundraising money). Given the heavy-tailedness of impact, this also represents a very non-democratic allocation of funds, but it does meaningfully differ from putting the ownership clearly in the hands of the donors.
Of course, there are many donors who feel like they have not participated in any trade like this. In most of those situations, I think it’s then the right call to charge a substantial surcharge on the literal cost of labor of someone doing object-level work, so that over time the cost of buying the altruistic impact does not just reflect the marginal cost of labor, but also (at least) the counterfactual cost of the people doing object-level work having abstained from a financially lucrative career.
As a concrete example of what this line of reasoning leads to, you can take a look at the Lightcone Infrastructure salary policy. Our salary policy is that “we will pay you whatever we think you could have made in industry, minus 30%”. This compensation structure means we will pay many people salaries quite substantially above what they need to live. The 30% number is trying to find a roughly fair split of sacrifice between the donor and the worker, where the donor pays 70% of the worker’s market rate and the worker gives up 30% of their salary, together making progress towards the shared goal. This number is skewed towards the donor because salary doesn’t capture most of the variance in income, since income is heavy-tailed and “salary” is anchored on the median outcome, and also because donors are selected from the pool of people who got lucky in the entrepreneurial lottery, so the balance they pay needs to also account for all the people who tried to make money for EA and failed.
My guess is the 70/30 split here is closer to fair, though still skewed quite a bit against the workers, but it’s at least an attempt to make the actual allocation of funds better reflect the fair allocation.
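To make the arithmetic of the policy concrete, here is a minimal sketch in Python; the helper name and the $200k figure are hypothetical illustrations of the 70/30 idea, not actual Lightcone numbers.

```python
def counterfactual_minus_share(estimated_industry_salary: float,
                               worker_share_given_up: float = 0.30) -> float:
    """Pay the estimated industry counterfactual minus a fixed sacrifice share.

    With the default 30% share, the donor covers 70% of the worker's market
    rate and the worker forgoes the remaining 30%.
    """
    return estimated_industry_salary * (1 - worker_share_given_up)

# Hypothetical example: someone estimated to earn $200k/year in industry
# would be offered $140k, with donor and worker splitting the cost 70/30.
print(counterfactual_minus_share(200_000))  # 140000.0
```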
I think this means that saying “the EA community does not own its donors’ money” is pretty inaccurate overall, though it is of course still tracking something important. I do indeed think that many people who have done highly impactful work in the EA community have a quite strong and direct claim of ownership over the funds that have been made by various entrepreneurs who did earning-to-give, as well as various megadonors and pots of funds like Open Philanthropy’s endowment. I think in an ideal world this balance of funds and power would be made more explicit by having something like impact-certificate markets with evaluations from current donors, but we are pretty far from that, and in the meantime I do really care about not wrongly enshrining a meme that ignores all the past trades of division-of-labor that have happened (and are happening on a daily basis).
I find this framing a bit confusing. It doesn’t seem to me that there is any greater obligation between EAs who are pursuing direct work and EA funders than there is between EA funders and any other potentially effective program that needs money.
Consider:
Alice gives up a lucrative career in order to go work in global health and development. However, Alice ends up just working for an ineffective organisation.
Bob is not an EA and never was in line for a high paying career, but ends up working for GiveDirectly.
Do we want to say that we should try and fund Alice more because we made some kind of implicit deal with her even though that money won’t produce effective good in the world? And if the money is conditional on the person doing good work… how is that different from the funders just funding good work without consideration for who’s doing it?
If I were going to switch to direct work the deal I would expect is “There is a large group of value-aligned funders, so insofar as I do work that seems high-impact, I can expect to get funded”. I would not expect that the community was going to somehow ensure I got well above median outcomes for the career area I was going into.
I’m not sure why you would expect more than that. Is this a discussion I missed, where people argued that there’s a deal? Is this some kind of clever “act as though you made the deals you would have wanted to make, even if you actually didn’t” thing?
I think in an ideal world this balance of funds and power would be made more explicit by having something like impact-certificate markets with evaluations from current donors
I absolutely agree that altruistic labour is underpaid and that a system like impact certificates would be great; what confuses me is the idea that people got into direct work with the expectation that the community was committing to make something like that happen.
I think it could make sense in various instances to form a trade agreement between people earning and people doing direct work, where the latter group has additional control over how resources are spent.
It could also make sense to act as though a trade agreement that was not in fact made had been made, if that incentivises people to do useful direct work.
But if this trade has never in fact transpired, explicitly or tacitly, I see no sense in which these resources “are meaningfully owned by the people who have forsaken direct control over that money in order to pursue our object-level priorities.”
Great comment. I think “people who sacrifice significantly higher salaries to do EA work” is a plausible minimum definition of who those calling for democratic reforms feel deserve a greater say in funding allocation. It doesn’t capture all of those people, nor solve the harder question of “what is EA work/an EA organization?” But it’s a start.
Your 70/30 example made me wonder whether redesigning EA employee compensation packages to include large matching contributions might help as a democratizing force. Many employers outside EA, in the private sector, offer a matching-contribution program, wherein they’ll match something like 1-5% of your salary (or up to a certain dollar value) in contributions to a certified nonprofit of your choosing. Maybe EA organizations (whichever voluntarily opt into this) could do that, except much bigger: say, 20-50% of your overall compensation is paid not to you but to a charity of your choosing. This could also be tied to tenure, so that the offered match increases at a faster rate than take-home pay, reflecting the intuition that committed longtime members of the EA community have engaged with the ideas more, and potentially sacrificed more, and consequently deserve a stronger vote than newcomers.
Ex: Sarah’s total compensation is 100k, of which she takes home 80k and her employer offers an additional 20k to a charity of her choosing. After 2 years working there, her total package jumps to 120k, of which she takes home 88k and allocates another 32k. After 10 years she takes home 110k and allocates another 90k, etc. This tenure could be transferable across participating organizations. With time, it may even resemble the “impact certificates” you mention.
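For illustration, a rough sketch of how a tenure-scaled match like the Sarah example could be computed; the function name and the growth-rate parameters are made up to roughly reproduce the numbers above, not a concrete policy proposal.

```python
def compensation_split(total_comp: float, years_tenure: int,
                       base_match_rate: float = 0.20,
                       match_growth_per_year: float = 0.025,
                       max_match_rate: float = 0.50) -> tuple[float, float]:
    """Split total compensation into take-home pay and a donor-directed match.

    The match share starts at base_match_rate and grows with tenure, so
    longtime employees direct a larger slice of their package. All
    parameters here are illustrative, not an actual policy.
    """
    match_rate = min(base_match_rate + match_growth_per_year * years_tenure,
                     max_match_rate)
    match = total_comp * match_rate
    return total_comp - match, match

# Roughly reproducing the Sarah example:
print(compensation_split(100_000, 0))   # (80000.0, 20000.0)
print(compensation_split(120_000, 2))   # (90000.0, 30000.0) -- close to the 88k/32k above
print(compensation_split(200_000, 10))  # (110000.0, 90000.0)
```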
Employers could limit this match to a prespecified list of plausibly-EA recipients if they wish. Employees could accept this arrangement in lieu of giving X% of their personal incomes (which has the added benefit of avoiding taxation on “income” that’s only going to be given away to largely tax-deductible organizations anyway). Employees could also elect to give a certain amount back to their employing organization, which some presumably would, since people tend to believe in the importance of the work they are doing. We could write software to anonymize these donations and avoid any fear of recrimination for NOT regifting it to the employing org.
One downside could be making it more expensive for EA organizations to hire, and thus harder for them to grow and harder for individual EAs to find an EA job. It also wouldn’t solve the fact that the resources controlled by EA organizations are not proportional to the number of people they employ, especially at the extremes. Perhaps if mega-donors like Dustin are open to democratization but wary of how to define the EA electorate, they’d support higher grants to participating recipients, on the logic that “if they’re EA enough to deserve my grant for X effective project, they’re EA enough to deserve a say in how some of my other money is spent too” (even beyond what they need for X).
For all I know EA organizations may have something like this already. If anyone has toyed with or tried to implement this idea before, I’d love to hear about it.
I think this is a proposal worth exploring. Open Phil could earmark additional funding to orgs for employee donation matching.
Another option would be to just directly give EA org employees regranting funds, with no need for them to donate their own money in order to regrant. However, requiring some donation, maybe matched at a high rate, e.g. 5:1, gets them to take on at least some personal cost to direct funding.
Also, the EA org doesn’t need to touch the money. The org can just confirm employment, and the employee can regrant through a system Open Phil (or GWWC) sets up or report a donation for matching to Open Phil (or GWWC, with matching funds provided by Open Phil).
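As a toy illustration of the 5:1 matching idea, assuming the employee’s own donation also counts toward what they get to regrant (the function and numbers are hypothetical):

```python
def regrant_budget(employee_donation: float, match_ratio: float = 5.0) -> float:
    """Total amount the employee directs: their own donation plus a match of
    match_ratio dollars for every dollar they put in. (Whether the employee's
    own dollars count toward the regranting pot is an assumption here, not
    something specified in the proposal above.)"""
    return employee_donation * (1 + match_ratio)

# e.g. a $2,000 personal donation matched 5:1 yields a $12,000 regranting budget
print(regrant_budget(2_000))  # 12000.0
```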
Thank you for this. These are very interesting points. I have two (lightly held) qualms, which I’m not sure actually obtain.
I suspect in the status quo, highly engaged, productive EAs who do work like yours do have a certain amount of influence over funding decisions. It certainly seems like most Future Fund regrantors fit into this pool. Obviously I don’t mean to imply everyone has the influence they deserve, but I do think this is meaningful when we consider the current state of EA vs a potential new system.
I worry this attitude also plays into some potentially harmful dynamics, where each EA feels like they have ownership over, and responsibility for, the entirety of EA. This may fuel things like a community organizer feeling the weight of every EA controversy on their own shoulders (I don’t know what was at play in that specific case), or an enthusiastic but naive 15-year-old who feels that they deserve to make funding decisions because they have a forum account. Perhaps there could be some sort of demarcation separating people who are actually making this trade with a willing counterparty (or counterparties) from anyone who is currently working an EA job or otherwise associated.
Again, just the thoughts that came to mind, both tentative.
As an example: I specifically chose to start working on AI alignment, rather than trying to build startups to fund EA, because of SBF. I would probably be making a lot more money had I taken a different route, and would likely not have to deal with being in such a shaky, intense field for which I’ve had to put parts of my life on hold.
That’s startups too, no?
Yeah, I agree. Though I can imagine a lot of startups or businesses that require a lot of work but don’t require as much brain power. I could have chosen a sector that doesn’t move as fast as AI (which has an endless stream of papers and posts to keep track of) and just requires me to build a useful product/service for people. Being in a pre-paradigmatic field where things feel like they are moving insanely fast can feel overwhelming.
I don’t know; being in a field that feels much more high-risk and forces me to grapple with difficult concepts every day is exhausting. I could be wrong, but I doubt I’d feel this level of exhaustion if I built an ed-tech startup (though that would still be difficult, and the market is shaky).
(Actually, one of my first cash-grabs would have been to automate digital document stuff in government since I worked in it and have contacts. I don’t think I’d feel the same intensity and shaky-ness tackling something like that since that’s partially what I did when I was there. Part of my strategy was going to be to build something that can make me a couple million when I sell it, and then go after something bigger once I have some level of stability.)
Part of what I meant by “shaky-ness” is maybe that there’s a potential for higher monetary upside with startups and so if I end up successful, there’s a money safety net I can rely on (though there’s certainly a period of shaky-ness when you start). And building a business can basically be done anywhere while for alignment I might be ‘forced’ to head to a hub I don’t want to move to.
Then again, I could be making a bigger deal about alignment due to bias of being in the field.
I took OP to be talking about major donors with independent wealth (not earning-to-give or GWWC-donation wealth), but I think this is a good point. It could be used to argue that the community should have had more control over SBF’s funds, since he chose to focus on earning so he could contribute to EA that way.
I think the cases of OP and SBF are very different. Alameda was set up with a lot of help from EA and was expected to donate a lot, if not most, to EA causes, whereas Dustin made his wealth without help from EA.
I also think this applies somewhat to Open Phil, though it’s messy and I honestly feel a bit confused about it.
Overall, I would say that either Open Phil should pay at least something close to market rate for the labor in various priority areas (which for a chunk of people would be on the order of millions per year), or they should give up some control over the money and take a stance towards the broader community that allows people with a strong track record of impact to direct funds relatively unilaterally.
Insofar as neither of those happens, I would take that as substantial evidence that I would want more EAs to go into earning to give who would be happy to take that trade, and I would encourage people doing direct work to work less unless someone pays them a higher salary. Though again, the details of the game theory here get messy really quickly, and I don’t have super strong answers.
I think this is a really good point. Not sure I agree on the magnitude of the funds owned in this way, but I think it’s a good intuition.
This kind of deal makes sense, but IMO it would be better for it to be explicit rather than implicit, by actually transferring money to people with a lot of positive impact (maybe earmarked for charity), perhaps via higher salaries or something like equity.
FWIW this loss of control over resources was a big negative factor when I last considered taking an EA job. It made me wonder whether the claims of high impact were just cheap talk (actually transferring control over money is a costly signal).
FWIW, I think this might be reasonable to expect if OP were implementing FDT, or known to be implementing FDT. But they aren’t, and so I’d be very hesitant to rely on or expect FDT-like promises (and indeed that is one factor which could push me towards making money rather than direct work), particularly since they aren’t enforceable.
Open Phil is pretty into decision theory stuff like this. I think some of them usually think more in EDT terms, but that comes out the same here. I am pretty confident most of the Open Phil longtermist team does not endorse CDT, and they are quite open to considerations like this.