I think the appropriate cost to use for evaluators, applicants, and admins is the opportunity cost of their time. For many such people this would be considerably higher than their wage and outside the ranges used in the model. I don’t know that this would change your conclusion, but it could significantly affect the numbers.
I find it disappointing that he tries to use EA as a shield (p. 17: “As a believer in the Effective Altruism movement, my primary goal has never been personal enrichment; I’m motivated by a commitment to help bring happiness and alleviate suffering for others.”) This is in the context of denying that he has billions of dollars stashed away. If he really cared about bringing happiness and alleviating suffering, why would he further tarnish the EA community’s reputation by associating himself with it in testimony before Congress?
I interpreted that as meaning that a $1,000 cash transfer costs a bit more than $1,000, including the direct cost of the cash transfer itself. So, something like $100 of delivery costs would mean that a $1,000 cash transfer would have a total cost of around $1,100.
Here HLI comes up with $1,170 as the total cost of a $1,000 cash transfer, which seems reasonably close to your numbers.
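For anyone who wants to check the arithmetic, a minimal sketch (the $1,170 total is HLI's figure; treating the remainder as overhead is my inference, not their published breakdown):

```python
# Arithmetic behind the comment above. The $1,170 total cost is HLI's
# figure; the overhead breakdown is inferred, not HLI's own accounting.
transfer_amount = 1_000   # cash actually received by the recipient (in $)
total_cost = 1_170        # HLI's estimated total cost of delivering it

overhead = total_cost - transfer_amount                    # $170
cost_per_dollar_delivered = total_cost / transfer_amount   # $1.17

print(f"Overhead per $1,000 transfer: ${overhead}")
print(f"Total cost per $1 delivered:  ${cost_per_dollar_delivered:.2f}")
```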
Keeping the set of EA org employees fixed, I think paying them higher salaries has three effects:
1. EA org employees will donate more. This portion is a sort of regranting mechanism. I would expect such people to be effective regranters, so this feels like a small win.
2. EA org employees will practice better self-care and invest in things that save them time and allow them to work more. They will be more productive as a result. Given the scarcity of talent, this feels like a big win.
3. EA org employees will have higher standards of living. This feels like a net loss, given the potential alternative uses of funds.
My intuition is that EA org salaries are low enough, and talent is scarce enough, that (2) probably dominates.
There’s also the consideration of how the set of people working at EA orgs will change.
- More people will be willing to work for EA orgs.
- The set of people who become willing to work for EA orgs as salaries go up will be different from the people willing to work at lower salaries.
Keeping the set of EA org jobs fixed, the pool of people willing to take those jobs will expand. I would guess with higher salaries the people hired would tend to be more talented, and less “totalising” in their commitment to EA. The former seems good, whereas the latter seems bad for some roles, but perhaps good for others. I think it’s important to recognize that people’s willingness to work for a low salary depends on many factors. In particular, families (parents, spouses, children) can be significant financial resources or burdens. So low salaries are an imperfect way to filter for level of commitment.
And, given the relative scarcity of EA talent vs. funding, making EA org work more attractive (relative to earning to give) seems valuable to me. With higher salaries the number of EA org jobs will tend to expand due to an increased supply of workers, which seems good on the margin.
I took your post seriously and had an extended exchange with you in the comments section. I indicated that I shared some of your concerns. I also expressed that I thought you had mischaracterized some of SBF’s views about bitcoin and other cryptocurrencies. It appears that you have since edited the post to correct some of those mischaracterizations, but as best I can tell, you did not acknowledge having done so.
I also disagreed with your view that many good projects would lose funding if there were a crypto downturn. Unfortunately, with FTX collapsing so abruptly, there is a risk of that happening. I am hopeful that other donors will step up to fund the highest value projects funded by FTX, but this is a real challenge we face as a community.
I’m puzzled by your statement in this new post that “It was quite obvious that this would happen...” There was certainly a risk things could go badly, and I think I personally underestimated the risk, but I don’t think it is credible to say that it was obvious.
Another EA connection is that Samantha Power, the USAID Administrator who appointed Dean Karlan, is married to Cass Sunstein, who has spoken at EA Global and was once a guest on the 80,000 Hours podcast.
I think the optimal level of reserves could vary significantly across organizations. In some cases, having a high level of reserves could make it easier to attract and retain key senior staff members. A 20-something EA might feel comfortable going to work for an org with a short runway, but someone mid-career with a family and who is asked to relocate might feel differently. Institutions and individuals might also be more inclined to collaborate with an organization that appears likely to be around for a while.
Here are a couple of other links that come to mind:
https://arxiv.org/abs/2008.02275
https://www.brookings.edu/research/aligned-with-whom-direct-and-social-goals-for-ai-systems/
I would be interested to see results from a similar experiment where the groups were given access to the “Bad Llama” model, or given the opportunity to create their own version by fine-tuning Llama 2 or another open-source model. I don’t have a strong prior as to whether such a model would help the groups develop more dangerous plans.
Thanks for your reply. I do think it would be unusual to see such promises, particularly from a firm looking for large investments. And I would expect to see a bunch of disclaimers, as you suggest. There might have been such language in the actual investment documents, but still. The excerpt shared on Twitter would have set off red flags for me because it seems sloppy and unprofessional, and it would have made me particularly concerned about their risk management, but I wouldn’t have concluded it was a Ponzi scheme or that there was something fraudulent going on with the reported returns.
It will be interesting to see if all of the FTX/Alameda fraud (if there was fraud, which seems very likely) took place after the most recent investment round. Investors may have failed not in financial diligence but in ensuring appropriate governance and controls (and, apparently, in assessing the character of FTX’s leadership).
Thank you for sharing this! Do you think your program will work better for people with significant meditation experience? And do you think your own experience was somewhat contingent on the meditation work you did in the Finder’s Course, not just the discovery that you benefited from loving-kindness meditation, but also the cumulative benefit of the meditation “reps” you’d been through?
This is wonderful news!
A couple of comments on the new intro to EA article:
The graph in the “Helping create the field of AI alignment research” section is interesting, but it takes up a lot of space given that it isn’t about the main point of the section. The section is saying, roughly, “AI will probably be a big deal, and the EA community has helped create and populate the AI alignment field, which is trying to increase the likelihood that AI is beneficial,” whereas the graph says “the Industrial Revolution was a big deal,” which is somewhat relevant but doesn’t seem to warrant a giant graph, in my opinion. Also, some readers might wonder whether the graph merely reflects constant exponential growth (my understanding is that it doesn’t, but it’s not obvious to me by looking at it).
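For what it's worth, there is a quick visual way to address that worry: on a log scale, constant exponential growth is a straight line, so genuinely accelerating growth shows up as upward curvature. A minimal sketch with synthetic data (not the actual series behind the article's graph):

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic series only -- NOT the data behind the article's graph.
years = np.arange(300)
constant_growth = 100 * 1.02 ** years        # fixed 2% growth every year

growth_rates = 0.005 + 0.00008 * years       # growth rate rises over time
accelerating = 100 * np.cumprod(1 + growth_rates)

fig, ax = plt.subplots()
ax.plot(years, constant_growth, label="constant exponential (straight on log scale)")
ax.plot(years, accelerating, label="accelerating growth (curves upward)")
ax.set_yscale("log")
ax.set_xlabel("year")
ax.set_ylabel("output (log scale)")
ax.legend()
plt.show()
```

If the article's graph used a log axis, readers could judge this at a glance.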
Under “Improving decision-making,” I don’t find the Metaculus example very compelling. The text suggests but does not establish that the forecasting community was ahead of consensus public or expert opinions. And it’s not clear to me what people/entities changed, or could have changed, their decisions in a way that would have been beneficial to humanity by using the Metaculus forecast. Maybe that’s obvious to other people though!
That’s a helpful clarification, thank you. I would be concerned, then, that if an organization were motivated to get SoGive’s seal of approval, they could improve their ratio by designating more of their money for specific purposes. Wouldn’t it be pretty easy to write down a four-year (non-binding) plan that would convert much of the current “reserves” to “designated funds”?
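As a toy illustration of the concern, assuming (purely hypothetically) that the relevant metric is unrestricted reserves divided by annual expenditure:

```python
# Toy illustration only. The ratio definition (unrestricted reserves /
# annual expenditure) is my assumption, not necessarily SoGive's actual
# metric, and all figures are hypothetical.
annual_expenditure = 5.0   # $M per year
total_reserves = 20.0      # $M held by the org

ratio_before = total_reserves / annual_expenditure   # 4.0 years of runway

# A non-binding four-year plan "designates" most of the money on paper.
designated = 16.0          # $M earmarked for specific purposes
ratio_after = (total_reserves - designated) / annual_expenditure  # 0.8 years

print(f"Reserves ratio before designation: {ratio_before:.1f} years")
print(f"Reserves ratio after designation:  {ratio_after:.1f} years")
```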
An added bonus: it appears to be a double issue.
On future funding flows, I specifically said “[i]n the event of a crypto crash, fewer new projects would be funded, and the bar for continuing to fund existing projects would be higher,” so I don’t think we disagree about that. But I disagree with the “lots of good projects (would) have to be ended” statement in your original post.
I’ve listened to SBF on several podcasts, and I haven’t gotten the impression that he thinks all cryptocurrencies are useless. I would recommend this one in particular: https://clearerthinkingpodcast.com/episode/038. I’m personally skeptical about the value of cryptocurrencies (relative to their current valuations), and my opinion on some things differs from SBF’s, but I find him to be one of the few people working in the crypto space who articulate balanced and insightful views on crypto.
Also, SBF did not use the word “Ponzi.” That was Matt Levine’s interpretation. I think what SBF was describing would be better characterized as a speculative bubble, since “Ponzi” implies an intent to defraud. A well-intentioned founder might have a crypto-based idea they are excited about. If investors/speculators bid the value of their coin/token up to unreasonable levels, that doesn’t mean the founder has devised a Ponzi scheme. Note that SBF said to “ignore what it does or pretend it does literally nothing” about the “box,” which implies that he thinks most crypto projects are at least trying to do something.
I would respectfully recommend editing your post where it says that SBF admitted cryptocurrencies are a Ponzi scheme. I believe strongly that it is not accurate as stated.
As for current EA spending vs. wealth, I think we are in a situation where, as a rough guess, 40% of EA wealth is in crypto, and current spending is 2-3% of wealth. If the crypto portion were mostly wiped out, current levels could be sustained by donors who are less invested in crypto. In the event of a crypto crash, fewer new projects would be funded, and the bar for continuing to fund existing projects would be higher, but I think non-crypto donors would step up to continue to fund projects that are going reasonably well. In the meantime, there is benefit from funding some new things and learning about what works well. If current spending were 5% of wealth, and if it seemed unlikely that new EA-aligned donors would emerge, I would be more concerned.
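For concreteness, the back-of-envelope version of that scenario, with every number a rough guess from the paragraph above rather than a measured value:

```python
# Back-of-envelope check of the scenario above. All inputs are rough
# guesses from the comment, not measured values.
crypto_share = 0.40    # assumed share of EA wealth held in crypto
spend_rate = 0.025     # current spending as a share of wealth (~2-3%)

wealth = 100.0         # normalize total EA wealth to 100 units
spending = spend_rate * wealth

# If the crypto portion were mostly wiped out:
remaining = wealth * (1 - crypto_share)
post_crash_rate = spending / remaining

print(f"Spending as share of post-crash wealth: {post_crash_rate:.1%}")  # ~4.2%
# At a 5% starting spend rate the same crash would imply ~8.3%, which is
# why that case would be more concerning.
```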
I’m also sympathetic to the argument, but I think the BOTEC overstates the potential benefit for another reason. If GiveWell finds an opportunity to give $100 million per year at an effectiveness of 15x cash transfers rather than 5x (and assuming there is a large supply of giving opportunities at 5x), I think the benefit is $200 million per year rather than $1 billion. The $100 million spent on the 15x intervention achieves what they could have achieved by spending $300 million on a 5x intervention. Of course, as noted, that is for only one year, so the number over a longer time horizon would be much larger.
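To make the adjustment explicit (all figures are the hypothetical ones above, not GiveWell's actual numbers):

```python
# Counterfactual value of a 15x opportunity when 5x opportunities are
# abundant. Figures are hypothetical, per the comment above.
spend = 100e6            # $100M/year moved to the better intervention
x_new, x_base = 15, 5    # effectiveness multiples vs. cash transfers

# Naive BOTEC: extra impact in cash-transfer-equivalent dollars.
extra_impact = spend * (x_new - x_base)       # $1.0B

# But a marginal donated dollar already buys 5x impact, so convert that
# impact back into donation-equivalent dollars at the 5x margin.
donation_equivalent = extra_impact / x_base   # $200M

# Same answer via the framing in the comment: $100M at 15x does the work
# of $300M at 5x, freeing up $200M.
savings = spend * x_new / x_base - spend      # $200M

print(f"${donation_equivalent/1e6:.0f}M  ==  ${savings/1e6:.0f}M")
```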
Even with that adjustment, and considering the issues raised by David Manheim and other commenters, I find this post quite compelling – thank you for sharing it.
If we are taking the assumed donor behavior as given, and if the sole objective is maximizing donations to charity, this makes sense. But there is an available option that would be better for both the EA who is earning to give and the charity. The E2Ger could take the $100k job and donate 32%. With even slightly diminishing marginal utility of consumption, the E2Ger would be better off consuming $68k with certainty than having an 80% chance of consuming $45k, a 10% chance of consuming $50k, and a 10% chance of consuming $275k. And the charity would get slightly more in expectation ($32k rather than $31.5k).
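A quick way to check that claim is to compare expected utilities under log utility (a standard, only moderately risk-averse assumption on my part; the consumption figures are the ones in the scenario above):

```python
import math

# Consumption outcomes, in $k, from the scenario above.
certain = 68.0                                   # $100k job, donating 32%
lottery = [(0.80, 45.0), (0.10, 50.0), (0.10, 275.0)]

expected_consumption = sum(p * c for p, c in lottery)   # 68.5
eu_risky = sum(p * math.log(c) for p, c in lottery)     # ~3.998
eu_certain = math.log(certain)                          # ~4.220

print(f"E[consumption], risky path: ${expected_consumption:.1f}k")
print(f"Log utility, certain $68k:  {eu_certain:.3f}")
print(f"Log utility, risky path:    {eu_risky:.3f}")
# The certain $68k wins despite slightly lower expected consumption,
# and the charity gets more in expectation ($32k vs. $31.5k).
```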
In practice, I think there is usually a tradeoff between risk and expected value when choosing among E2G jobs/careers, so choosing riskier options and donating a higher percentage when outcomes are favorable will tend to be the right policy. I’m just not sure that the main argument presented here strengthens the case for doing so.
Thanks for this interesting writeup and discussion!
I think EA movement building attracts people with different levels of commitment to EA. Doing direct work, at least given current salaries, might require most people to forgo 50-90% of their market income. This could mean people doing direct work will have a significantly different lifestyle from people in their peer groups, particularly if the people doing direct work have children and do not have partners with high incomes.
People who find EA arguments compelling, but who are not willing to make the lifestyle sacrifice required to do direct work, will find earning to give more appealing. Work often does not scale (down) well, so splitting one’s time between working for a market wage and doing direct work will tend not to be optimal.
This would change if direct work paid closer to market wages for different skillsets. More EAs could do direct work, and those who are more committed could donate greater shares of their income. But this could affect the culture within organizations (e.g. if colleagues had very different salaries or donated very different amounts), and lower salaries can serve a selection purpose for roles where the level of commitment to EA might affect job performance.
I’m not suggesting this needs to be in the model, but I think if direct work is an option only for highly committed EAs, it will affect the relative scarcity of labor and capital within the movement.
The returns shown in the document are not indicative of fraud; those sorts of returns are very possible when skilled traders deploy short-term trading strategies in inefficient markets, which crypto markets surely were at the time. The default risk on a loan paying 15% might have been very low, but, contrary to what the document suggested, it was not zero. The “no downside” characterization should have been caught by a lawyer, and it was misleading.
~~Nobody with an understanding of trading would have~~ [EDIT] I would not have concluded they were engaged in Ponzi schemes or were misrepresenting their returns based on the document. There are plenty of sloppy, overoptimistic startup pitch decks out there, but most of the authors of those decks are not future Theranoses.