That’s a helpful clarification, thank you. I would be concerned, then, that if an organization were motivated to get SoGive’s seal of approval, they could improve their ratio by designating more of their money for specific purposes. Wouldn’t it be pretty easy to write down a four-year (non-binding) plan that would convert much of the current “reserves” to “designated funds”?
I think the optimal level of reserves could vary significantly across organizations. In some cases, having a high level of reserves could make it easier to attract and retain key senior staff members. A 20-something EA might feel comfortable going to work for an org with a short runway, but someone mid-career with a family and who is asked to relocate might feel differently. Institutions and individuals might also be more inclined to collaborate with an organization that appears likely to be around for a while.
Suppose an organization spends 1⁄4 of its reserves every year, and earns a 5% return on those reserves. If I make a $1 donation, the org would increase its spending by $0.25 in year 1. In year 2 it would increase its spending by (0.75)*(1.05)*(0.25) = $0.20. In year 3 it would spend $0.16, in year 4 $0.12, etc. In the limit, the full donation plus accrued interest gets spent, even if it sits in a bank for a while. The timing would concern me only if I felt that money spent on nuclear security this year would be significantly more valuable than money spent in subsequent years.
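To make the arithmetic concrete, here is a minimal sketch in Python of the spend-down schedule, using the 1⁄4 spend rate and 5% return assumed above:

```python
# Spend-down of a $1 donation: the org spends 1/4 of its reserves each
# year, and the remainder earns a 5% return.
donation = 1.00
spend_rate = 0.25
growth = 1.05

reserves = donation
total_spent = 0.0
for year in range(1, 31):
    spent = reserves * spend_rate              # spending this year
    reserves = (reserves - spent) * growth     # remainder earns interest
    total_spent += spent
    if year <= 4:
        print(f"Year {year}: ${spent:.2f}")    # $0.25, $0.20, $0.16, $0.12

print(f"Total spent after 30 years: ${total_spent:.2f}")
# The series converges to 0.25 / (1 - 0.75 * 1.05) ≈ $1.18, i.e. the full
# donation plus accrued interest eventually gets spent.
```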
I’m a little late to this discussion, but I first want to say thank you for this post! This is a topic I’ve been interested in of late and your post filled in a lot of gaps in my knowledge.
I see the potential for this to be tractable for EA funders and entrepreneurs, because non-EA funders might be incentivized to fund the deployment of interventions that are effective enough. It’s unfortunate that lonely people are less healthy and less productive, in addition to having lower life-satisfaction, but this phenomenon may have a silver lining. There might be interventions that benefit lonely people, and that also provide enough benefit to their employers and/or insurers for those companies to be financially motivated to provide them. An intervention might not clear the EA bar for $/DALY or $/WELLBY, but if it provides a favorable return on investment for employers and insurers, it has the potential to be deployed at scale. An EA funder might still need to fund the development of the intervention, demonstration of its effectiveness, and the spreading of information about its effectiveness. And EA entrepreneurs would need to do the work.
There’s an organization called the Foundation for Social Connection that is funded by the Coalition to End Social Isolation. Two of the four members of the latter’s steering committee are large health insurers, and there are some large corporations among their members. The Foundation for Social Connection has an Innovation Accelerator that could be a good resource for organizations interested in this space. Some of the researchers cited in this post are affiliated with the organization.
In addition to employers and insurers, school districts are another possible funder for effective interventions. I know of a Social-Emotional Learning curriculum that aims to help students make “intentional connections.” Schools could potentially see short-term benefits that justify the cost (primarily the opportunity cost of classroom time) if the intervention improves students’ behavior, engagement, and well-being. Long-term benefits could be significant as well, but these would be more difficult to demonstrate.
One other thought: there might be benefit to thinking more broadly in terms of improving people’s relationships or connectedness, rather than just addressing loneliness. I would suspect that most people would benefit from some combination of (1) increased awareness of the importance of relationships for their physical health and emotional wellbeing; and (2) resources to help them improve their existing relationships and to form new high quality relationships.
Thank you for sharing this! Do you think your program will work better for people with significant meditation experience? And do you think your own experience was somewhat contingent on the meditation work you did in the Finder’s Course, beyond the discovery that you benefited from loving-kindness meditation? I have in mind something more along the lines of the benefit from the meditation “reps” you’d been through.
Another EA connection is that Samantha Power, the USAID Administrator who appointed Dean Karlan, is married to Cass Sunstein, who has spoken at EA Global and was once a guest on the 80,000 Hours podcast.
I find it disappointing that he tries to use EA as a shield (p. 17, “As a believer in the Effective Altruism movement, my primary goal has never been personal enrichment; I’m motivated by a commitment to help bring happiness and alleviate suffering for others.”) This is in the context of denying that he has billions of dollars stashed away. If he really cared about bringing happiness and alleviating suffering, why would he further tarnish the EA community’s reputation by associating himself with it in testimony before Congress?
I think it depends at least in part on one’s view of the long run value of crypto assets. I’m skeptical that they are worth what they are currently valued at, in aggregate (and am more skeptical that they were worth the prices they were trading at a year ago). So I think it would have been unethical for me personally to be paying for Super Bowl ads encouraging people to get into crypto. But if Tom Brady or whoever genuinely believed that it was in people’s best interest to buy some crypto, I’m not really inclined to judge them for encouraging investment.
But I think there’s a difference between investment and trading. It’s harder to justify encouraging people to day trade, given that day traders lose money in aggregate (mostly via exchange fees). I’d be curious if someone could make a case for encouraging short-term trading.
Thank you for this post. I missed it when it was originally posted, and only came across it via the recent “Friendship Forever” post. An organization doing work in this area that might be of interest is the Foundation for Social Connection. They have an “Innovation Accelerator” that could potentially provide funding for projects addressing loneliness. It looks like they are funded in part by two large health insurance companies (Humana and United Healthcare), based on this.
Thanks for your reply. I do think it would be unusual to see such promises, particularly from a firm looking for large investments. And I would expect to see a bunch of disclaimers, as you suggest. There might have been such language in the actual investment documents, but still. The excerpt shared on Twitter would have set off red flags for me because it seems sloppy and unprofessional, and it would have made me particularly concerned about their risk management, but I wouldn’t have concluded it was a Ponzi scheme or that there was something fraudulent going on with the reported returns.
It will be interesting to see if all of the FTX/Alameda fraud (if there was fraud, which seems very likely) took place after the most recent investment round. Investors may have failed not in financial diligence but in ensuring appropriate governance and controls (and, apparently, in assessing the character of FTX’s leadership).
The returns shown in the document are not indicative of fraud; those sorts of returns are very possible when skilled traders deploy short-term trading strategies in inefficient markets, which crypto markets surely were at the time. The default risk when borrowing at 15% might have been very low, but it was not zero, contrary to what they suggested. The “no downside” characterization was misleading and should have been caught by a lawyer.
[EDIT] I would not have concluded they were engaged in Ponzi schemes or were misrepresenting their returns based on the document. There are plenty of sloppy, overoptimistic startup pitch decks out there, but most of the authors of those decks are not future Theranoses.
Thank you for your response. And I apologize for being defensive in my comment. And for not noticing your edits when they happened.
I took your post seriously and had an extended exchange with you in the comments section. I indicated that I shared some of your concerns. I also expressed that I thought you had mischaracterized some of SBF’s views about bitcoin and other cryptocurrencies. It appears that you have since edited the post to correct some of those mischaracterizations, but you did not acknowledge having done so, best I can tell.
I also disagreed with your view that many good projects would lose funding if there were a crypto downturn. Unfortunately, with FTX collapsing so abruptly, there is a risk of that happening. I am hopeful that other donors will step up to fund the highest value projects funded by FTX, but this is a real challenge we face as a community.
I’m puzzled by your statement in this new post that “It was quite obvious that this would happen...” There was certainly a risk things could go badly, and I think I personally underestimated the risk, but I don’t think it is credible to say that it was obvious.
I interpreted that as meaning that a $1,000 cash transfer costs a bit more than $1,000, including the direct cost of the cash transfer itself. So, something like $100 of delivery costs would mean that a $1,000 cash transfer would have a total cost of around $1,100.
Here HLI comes up with $1,170 as the total cost of a $1,000 cash transfer, which seems reasonably close to your numbers.
Here are a couple of other links that come to mind:
https://arxiv.org/abs/2008.02275
https://www.brookings.edu/research/aligned-with-whom-direct-and-social-goals-for-ai-systems/
Added bonus that it appears to be a double issue.
This is wonderful news!
A couple of comments on the new intro to EA article:
The graph in the “Helping create the field of AI alignment research” is an interesting one, but it takes up a lot of space given that it isn’t about the main point of the section. It seems like the section is about “AI will probably be a big deal and the EA community has helped create and populate the AI alignment field, which is trying to increase the likelihood that AI is beneficial” whereas the graph says “the Industrial Revolution was a big deal” which is somewhat relevant but doesn’t seem to warrant a giant graph in my opinion. Also, some readers might wonder if the graph merely reflects constant exponential growth (my understanding is that it doesn’t, but it’s not obvious to me by looking at it).
Under “Improving decision-making,” I don’t find the Metaculus example very compelling. The text suggests but does not establish that the forecasting community was ahead of consensus public or expert opinions. And it’s not clear to me what people/entities changed, or could have changed, their decisions in a way that would have been beneficial to humanity by using the Metaculus forecast. Maybe that’s obvious to other people though!
On future funding flows, I specifically said “[i]n the event of a crypto crash, fewer new projects would be funded, and the bar for continuing to fund existing projects would be higher,” so I don’t think we disagree about that. But I disagree with the “lots of good projects (would) have to be ended” statement in your original post.
I’ve listened to SBF on several podcasts, and I haven’t gotten the impression that he thinks all cryptocurrencies are useless. I would recommend this one in particular: https://clearerthinkingpodcast.com/episode/038. I’m personally skeptical about the value of cryptocurrencies (relative to their current valuation), and my opinion on some things differs from SBF’s, but I find him to be one of the few people working in the crypto space who articulate balanced and insightful views on crypto.
Also, SBF did not use the word “Ponzi.” That was Matt Levine’s interpretation. I think what SBF was describing would be better characterized as a speculative bubble, since “Ponzi” implies an intent to defraud. A well intentioned founder might have a crypto-based idea they are excited about. If investors/speculators bid the value of their coin/token to unreasonable values, that doesn’t mean the founder has devised a Ponzi scheme. Note that SBF said “ignore what it does or pretend it does literally nothing” about the “box,” which implies that he thinks most crypto projects are at least trying to do something.
I would respectfully recommend editing your post where it says that SBF admitted cryptocurrencies are a Ponzi scheme. I believe strongly that it is not accurate as stated.
As for current EA spending vs. wealth, I think we are in a situation where, as a rough guess, 40% of EA wealth is in crypto, and current spending is 2-3% of wealth. If the crypto portion were mostly wiped out, current levels could be sustained by donors who are less invested in crypto. In the event of a crypto crash, fewer new projects would be funded, and the bar for continuing to fund existing projects would be higher, but I think non-crypto donors would step up to continue to fund projects that are going reasonably well. In the meantime, there is benefit from funding some new things and learning about what works well. If current spending were 5% of wealth, and if it seemed unlikely that new EA-aligned donors would emerge, I would be more concerned.
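As a rough back-of-the-envelope (using the guessed numbers above, which are illustrative rather than actual figures):

```python
# Rough back-of-the-envelope using the guesses above: if ~40% of EA wealth
# is in crypto and current spending is ~2-3% of wealth, what does spending
# look like as a share of the wealth remaining after a crypto wipeout?
crypto_share = 0.40                  # guessed share of EA wealth in crypto
for spend_rate in (0.02, 0.03):      # current spending, ~2-3% of wealth
    post_crash = spend_rate / (1 - crypto_share)
    print(f"{spend_rate:.0%} of wealth -> {post_crash:.1%} of remaining wealth")
# 2% -> 3.3%, 3% -> 5.0%: elevated, but plausibly sustainable by donors
# who are less invested in crypto.
```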
I would be interested to see results from a similar experiment where the groups were given access to the “Bad Llama” model, or given the opportunity to create their own version by re-tuning Llama 2 or another open source model. I don’t have a strong prior as to whether such a model would help the groups to develop more dangerous plans.