Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I'm trying to figure out where effective altruism can fit into my life these days and what it means to me.
Yarrow Bouchard
Wow, you've read a lot! My intro text to effective altruism (sort of) was Peter Singer's The Life You Can Save, published in 2009, but it's probably redundant with a lot of the stuff you've already read and know.
If you're interested in reading more about longtermism, the Oxford University Press anthology Essays on Longtermism: Present Action for the Distant Future, published in August, is free to read online, both on the website and as a PDF. Some of the essays changed my mind, others I saw major flaws in, and overall I now have a harsh view of longtermism because scholarship like Essays on Longtermism has failed to turn up much that's interesting or important.
An epistemology/philosophy of science book I love that isn't directly about EA at all but somehow seems to keep coming up in discussions in and around EA is The Beginning of Infinity by the physicist David Deutsch. Deutsch's TED Talk is a good, quick introduction to the core idea of the book. Deutsch's hour-long interview on The TED Interview is a good preview of what's in the book and a good introduction to his ideas and worldview.
This book is absolutely not part of the EA "canon", nor is it a book that a large percentage of people in EA have read, but I think it's a book that a large percentage of people in EA should read. Deutsch's ideas about inductivism and AGI are the ones that are most clearly, directly relevant to EA.
I won't say that I know Deutsch's ideas are correct (I don't), but I really appreciate his pushback against inductivism and against deep learning as a path to AGI, and I appreciate the level of creativity and originality of his ideas.
The big asterisk or question mark I would put over Julia Galef's work is that she co-founded the Center for Applied Rationality (CFAR). Galef left CFAR in 2016, so she may not be responsible for the bad stuff that happened there. The stories about what happened at CFAR, at least around 2017-2019, are really bad. One of the CFAR co-founders described how CFAR employees would deliberately, consciously behave in deceptive, manipulative ways toward their workshop participants in order to advance CFAR's ideas about existential risk from AI. The most stomach-churning thing of all is that CFAR organized a summer camp for kids where, according to one person who was involved, things were even worse than at CFAR itself. I don't know the specifics of what happened at the summer camp, but I hate the idea that kids may have been harmed in some way by CFAR's work.
Galef may not be responsible at all for any of this, but I think it's interesting how much of a failure this whole idea of "rationality training" turned out to be, and how unethically and irrationally the people in key roles in this project behaved.
Oh, so if this is not IPO-contingent, what explains the timing on this? Why 2026 or 2027 and not 2025 or 2024?
I do know there are platforms like Forge Global and Hiive that allow for buying/selling shares in private startups on the secondary market. I just wonder why a lot of people would be selling their shares in 2026 or 2027, specifically, rather than holding onto them longer. I think many employees of these AI companies are true believers in the growth story and the valuation story for these companies, and might be reluctant to sell their equity at a time when they feel they're still in the most rapid growth phase of the company.
Any particular reason to think many people out of these dozens or hundreds of nouveau riche will want to donate to meta-EA? I understand the argument for people like Daniela Amodei and Holden Karnofsky to give to meta-EA (although, as noted in another comment, Daniela Amodei says she doesn't identify with effective altruism), but I don't understand the argument for a lot of smaller donors donating to meta-EA.
Interesting footnote about the Future of Life Institute. Would that apply to a software engineer working for OpenAI or Anthropic, or just a donation directly from one of those companies?
My general point about established charities like the Future of Life Institute or any other example you care to think about is that most donors will probably prefer to donate directly to charities rather than donating through an EA fund or a regranter. And most will probably want to donate to things other than meta-EA.
Not the shrimp sauce, surely!
How much new funding is Austin Chen expecting? Is it conditional on an Anthropic IPO? Are your expectations conditional on an Anthropic IPO?
I suppose the whole crux of the matter is: even if there is an additional ~$300-400 million per year, what percentage will go into meta-EA, EA funds, general open grantmaking, or the broader EA community as opposed to GiveWell, GiveWell's recommended charities, or existing charities like the Future of Life Institute? If it's a low percentage, the conversation seems moot.
The Against Malaria Foundation still has the juice!
My intuition about patient philanthropy is this: if I have $1 million that I can spend philanthropically now or I can invest it for 100 years at a 7% CAGR and grow it to $868 million in 2126, I think spending the $1 million in 2026 will have a bigger, better impact than the $868 million in 2126.
Gross world product per capita (PPP) is around $24,000 now. It's forecasted to grow at 2% a year. At 2% a year for 100 years, it will be $174,000 in 2126. So, the world on average will be much wealthier than the wealthiest nations today. The U.S. GDP per capita (PPP) is $90,000 and Norway's is $107,000 (I'm ignoring tax havens with distorted stats).
Why should the poor people of today give to the rich people of the future? How is that cost-effective?
The difference between the GiveWell estimate of the cost to save a life and the estimated statistical cost of saving a life in the U.S. is $3,500 vs. $9 million, so a ~2,500x difference. $1 million now could save 285 lives. $868 million in 2126 could save 96 lives, if we think poorer countries will have catch-up growth that brings them up to $90,000+ in GDP per capita (PPP).
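To lay the arithmetic out in one place, here's a rough sketch in Python. The 7% return, 2% growth rate, and the $3,500 and $9 million cost-per-life figures are just the illustrative assumptions from this comment, not forecasts.

```python
# Back-of-the-envelope math for the give-now vs. invest-for-100-years comparison.
# All inputs are the illustrative assumptions above, not real forecasts.

donation = 1_000_000        # dollars available to give in 2026
years = 100
cagr = 0.07                 # assumed investment return

future_value = donation * (1 + cagr) ** years
print(f"$1M grown at 7% for 100 years: ${future_value:,.0f}")   # ~$868 million

# Cost to save a life: a GiveWell-style figure today vs. a rich-country
# statistical figure, which is roughly what a much wealthier 2126 might look like.
cost_per_life_now = 3_500
cost_per_life_2126 = 9_000_000

print(f"Lives saved by giving now: {donation / cost_per_life_now:,.0f}")           # roughly 285
print(f"Lives saved by giving in 2126: {future_value / cost_per_life_2126:,.0f}")  # roughly 96

# Gross world product per capita (PPP) at 2% growth, for the wealth comparison.
gwp_per_capita_2126 = 24_000 * 1.02 ** years
print(f"GWP per capita (PPP) in 2126: ${gwp_per_capita_2126:,.0f}")                # ~$174,000
```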
The poorest countries may not have catch-up growth, and may not even grow commensurately with the world on average, but in that case it becomes even more important to spend the $1 million on the poorest countries now to try to make sure that growth happens. Stimulating economic growth in sub-Saharan African countries where growth has been stagnant may be one of the most important global moral priorities. Thinking about 100 years in the future only makes it feel more urgent, if anything.
Plus, the risk that a foundation trying to invest money for 100 years doesn't make it to 2126 seems high.
If you factor in the possibility of transformative technologies like much more advanced AI and robotics, biotech, and so on, and/or the possibility of much faster per capita economic growth over the next 100 years, the case for spending now rather than waiting a century gets even stronger.
Also, looking back, @trammell's takes have aged very well:
- It is unlikely we are in the most important time in history
- If not, it is good to save money for that time
Had Phil been listened to, then perhaps much of the FTX money would have been put aside, and things could have gone quite differently.
Unless you explicitly warn your donors that you're going to sit on their money and do nothing with it, you might anger them by employing this strategy, such that they won't donate to you again. (I don't know if SBF would have noticed or cared because he couldn't even sit through a meeting or an interview without playing a video game, but what applies to SBF doesn't apply to most large donors.)
Also, if there is a most important time in history, and if we can ever know we're in the most important time in history while we're in it, it might be 100 years or 1,000 years from now, and obviously holding onto money that long is a silly strategy. (Especially if you think we're going to start having 10% economic growth within 50 years due to AI, but even if you don't.)
As a donor, I want to donate to charities that can "beat the market" in terms of their impact, i.e., the impact they create by spending the money now is bigger than the impact of investing the money and spending it in 5 years. I would be furious if I found out the charities I donate to were employing the invest-and-wait strategy. I can invest my own money or give it to someone who will spend it.
My thought process is vaguely, hazily something like this:
- There's a ~50% chance Anthropic will IPO within the next 2 years.
- Conditional on an Anthropic IPO, there's a ~50% chance any Anthropic billionaires or centimillionaires will give tons of money to meta-EA or EA funds.
- Conditional on Anthropic billionaires/centimillionaires backing up a truck full of money to meta-EA and EA funds, there's a ~50% chance that worrying about the potential corrupting effects of the money well in advance is a good allocation of time/energy/attention.
So, the overall chance this conversation is important to have now is ~10%.
The ~50% probabilities and the resulting ~10% probability are totally arbitrary. I don't mean them literally. This is for illustrative purposes only.
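For what it's worth, the arithmetic behind the ~10% is just the three placeholder probabilities multiplied together as if the steps were independent:

```python
# The three ~50% steps above, multiplied as if they were independent.
# The probabilities are placeholders for illustration, not real estimates.
p_ipo = 0.5              # Anthropic IPOs within ~2 years
p_big_giving = 0.5       # conditional: Anthropic wealth flows heavily to meta-EA / EA funds
p_worth_preparing = 0.5  # conditional: worrying well in advance is a good use of attention

p_conversation_matters = p_ipo * p_big_giving * p_worth_preparing
print(f"{p_conversation_matters:.1%}")  # 12.5%, i.e. roughly the ~10% figure above
```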
But the overall point is that it's like the Swiss cheese model of risk, where three things have to go "wrong" for a problem to occur. But in this case, the thing that would go "wrong" is getting a lot of money, which has happened before with SBF's chaotic giving, and has been happening continuously in a more careful way with Open Philanthropy (now Coefficient Giving) since the mid-2010s.
If SBF had made his billions from selling vegan ice cream and hadn't done any scams or crimes, and if he had been more careful and organized in the way he gave the money (e.g., been a bit more like Dustin Moskovitz/Cari Tuna or Jaan Tallinn), I don't think people would be as worried about the prospect of getting a lot of money again.
Even if the situation were like SBF 2.0, it doesn't seem like the downsides of that would be that bad or that hard to deal with (compared to how things in EA already are right now), so the logic of carefully preparing for a big impact risk on the ~10% (or whatever it is) chance it happens doesn't apply. It's a small impact risk with a low probability.
And, overall, I just think the conversations like this I see in EA are overly anxious, overly complicate things, and intellectualize too much. I don't think they make people less corruptible.
Different in what ways? Edit: You kind of answered this in your edit, but what I'm getting at is: SBF's giving was indiscriminate and disorganized. Do you think the Anthropic nouveau riche will give money as freely to random people in EA?
I'm also thinking about the fact that Daniela Amodei said this about effective altruism earlier this year:
"I'm not the expert on effective altruism. I don't identify with that terminology. My impression is that it's a bit of an outdated term."
Maybe it was just a clumsy, off-the-cuff comment. But it makes me go: hmm.
She's gonna give her money to meta-EA?
I guess my next thought is: are we worried about Holden Karnofsky corrupting effective altruism? Because if so, I have bad news…
Thatâs a really good point!
Longtermism is a spectacular intellectual failure. It's been eight years and there are zero good ideas for longtermist interventions other than those that long predate longtermism. What does longtermism recommend we actually do differently? Absolutely nothing.
Do you think lots of money will just be given to EA-related charities such as the Against Malaria Foundation, the Future of Life Institute, and so on (that sounds plausible to me), or do you think lots of money will also be given to meta-EA, EA infrastructure, EA community building, EA funds, that sort of thing? It's the second part that I'm doubting.
I suppose a lot of it comes down to what specifically Daniela Amodei and Holden Karnofsky decide to do if their family has its big liquidity event, and that's hard to predict. Given Karnofsky's career history, he doesn't seem like the kind of guy to want to just outsource his family's philanthropy to EA funds or something like that.
I find so much EA analysis, in general, to be too clever by half. (Per Wiktionary: "Shrewd but flawed by overthinking or excessive complexity, with a resulting tendency to be unreliable or unsuccessful.") So many conversations like this could be helped along by just having a simpler and more commonsense analysis. Does EA need to have a big conversation right now about how to handle it if EA suddenly gets tons of money? Probably not.
Expecting the money to come in sounds like wishful thinking. Even if there are Anthropic billionaires with liquidity in 2026 or 2027 (which is not guaranteed to happen), and even if these billionaires are influenced by EA and want to give money to some of the same charities or cause areas that people in EA care about, who says the money is going to flow through the EA community/movement? If I were an Anthropic billionaire, rather than trying to be Sam Bankman-Fried 2.0 and just spraying a firehose of money at the EA community generally, I would pick the charities I want to donate to and give to them directly.
Besides Sam Bankman-Fried, the other billionaires who have donated to EA-related charities and causes, like Dustin Moskovitz/Cari Tuna and Jaan Tallinn, have managed their own giving entirely themselves. Sam Bankman-Fried's behaviour in general was impulsive and chaotic (his financial crimes seem less like rational calculation and more like poor impulse control or general disinhibition, as crime often is), and the way he gave money to EA seems like an extension of that. A more careful person probably wouldn't do it that way. They would probably start a private foundation, hire some people to help manage it, and run it quietly out of the public view. Maybe they would take the unusual step of starting something like Open Philanthropy/Coefficient Giving and do their giving in a more public-facing way. But even so, this is still under their control and not the EA community's control.
If some Anthropic billionaire does just back a truck full of money up to the EA community, that's a good problem to have, and that's the sort of problem you can digest and adapt to as it starts happening. You don't need to invest a lot of your limited time, energy, and attention in it 6 months to 3 years in advance, when it's not actually clear it will ever happen at all. (This isn't an asteroid; you don't need to fret about long-tail risk.)
Interesting, say more about how you see EA struggling or failing to sit in discomfort?
My much-belated reply! On why I think short-form social media like Twitter and TikTok are a case of good money chasing after bad: the medium is so broken and ill-designed in these cases that I think the best option is to just quit these platforms and focus on long-form stuff like YouTube, podcasts, blogs/newsletters (e.g. Medium, Substack), or what-have-you.
The most eloquent critic of Twitter is Ezra Klein. Here's an excerpt from a transcript of his podcast, from an episode recorded in December 2022:
OK, Elon Musk and Twitter. Elon Musk. Let me start with the part of this that I know and get to the part I don't know. We're talking in the aftermath of Musk tweeting that my pronouns are prosecute and Fauci. I wrote a piece when Musk announced that he was going to acquire Twitter that was all about the idea that it was going to be a more profound change and upheaval for that service to have it run by someone who liked what was worst about it than people realized. And I think that has proven true.
The thing – and this gets to the question that I have really been – I mean, I stopped tweeting back in April. I think I've tweeted a couple of times since then and then once a month or something occasionally. I think my last actual tweet was in maybe October, something like that. So I don't really use Twitter anymore. And that's been good for me.
But I go back and forth on this question as people have been looking for an alternative, and there isn't really one. Mastodon, which people talk about, that's not a Twitter alternative. It's something very different. Should we want that? Should we want a thing that does what Twitter does, but is not Twitter or is not run by Elon Musk or something of that nature? And I'm not sure that we should.
I think it is worth people really reflecting on this idea that in a matter of roughly two decades, social media has gone from being barely a thing at all to something used by billions and billions of people around the world. I mean, it has become a civilizational fact faster than almost anything in human history. And something operating at that macro of a scale should show some civilizational effect.
If it is good, we should be able to say, well, this is what has gotten better. GDP is growing faster because we're sharing so many more ideas, and so innovation is sped up. Or we're more humane and gentle and compassionate towards one another because we're able to see each other across boundary and faction and country and generation. We're kinder because we're sharing so much more. We're happier because we're so much more connected.
Something, something should have gotten better. And I would say – and I think the evidence is very clear – nothing has. You cannot point to one macro indicator that has gotten substantially better, faster, anything, in the time since social media came on the scene. And I'm not saying that is 100 percent the fault of social media, but I am saying that it implies, at least, there is not some gigantic value here, that before it was offered to us, we were really struggling.
So that's one thing. I think it's really worth asking, why hasn't something been better? And my answer to this, which I've kind of played with for a long time and finally wrote up, is that the flip side of all of this information and connection has been distraction and irritation.
That we have more that we can know and more that we can see and more content to consume, but what we don't have is a space for reflection. What we don't have are the habits of mind that tend to help us absorb a difficult question in the best way and come to a good view on it. What we don't have are the sort of temperaments and virtues, something that is helping us have the virtues of how we live small-d democratically or civically with one another.
Twitter is one of many things that are not good for that. It's not the only one, and I don't think this should only be a conversation about Twitter, though I do think Twitter is unusually central to politics and media and technology. And honestly, if you just want to look at everything I am saying in miniature, look at Elon Musk. You can go back in time and watch his interviews and look at the things he's done.
And I have not, over time, been a Musk hater. There's many things he's said that have annoyed me over the years, things he's done that have annoyed me, ways in which I don't think he's a great person. But he also did very important things, built very important things, rockets and cars and solar panels and so on. Does he seem to anybody like he's more focused on the important questions in life and more able to hear things from people he disagrees with and able to absorb them in a space of generosity and focus – that his attention span is doing really well right now?
I mean, you can watch the effect it has had solely on him. And I think just generalize that out, I think a real tragedy of Twitter is that Musk is a man with many failings and many strengths. And it has amplified his failings and obsessed him with things that it is not good for him to be obsessed with, like the amount of social feedback he gets.
And that is going to completely overwhelm many of his strengths and many of the good effects he could have had on the world, or was even having on the world. This is not going to be good and has not been good for Tesla. And I think Tesla is and was an important company making electric vehicles cooler, more widely acceptable, and hastening a transition to them. But he's making Tesla poisoned among many of the people who should be most excited to buy a Tesla.
So, all that said, I wonder a lot if it is good to have a Twitter. I think a lot of us are now so used to it that you think, well, what we need is something else like it, but a bit better. Maybe we don't need something like it. Maybe a platform that condenses everybody's thoughts down to bumper sticker bluntness is actually just not a good thing. Its structural build is a bad build – the idea that we should come to expect thoughts to be that short, that we should train our minds for that kind of novelty.
I mean, back in the day, I was always amazed at how easy it was for me to waste time on the internet or on my phone. I would read articles on newspapers and so on. And then as each successive social media network got better, I enjoyed wasting time reading articles. And now I look at single images, and I don't really do TikTok, but in theory, TikToks or tweets or whatever, and that habituation of the brain to novelty and to simplicity, I don't think is a good thing for me, and I don't think a good thing, broadly.
So I would like to imagine things that are valuable in part and are widely used in part, not because they are so good at grabbing our attention from us, but because, in some way, they feed our attention back to us. They help center us a little bit, so that the habits of mind they encourage are habits of mind that we want. And maybe nothing staring at your phone is really like that. I'm not sure that's true, but it's at least a question I want to play with.
Maybe if you're going to insist on distracting yourself while you're standing in line at the grocery by staring at your phone, there's no habit of mind that I think is a great one that is going to be encouraged by that. And the fact that I constantly do it is the problem, and asking somebody to fix it for me without me changing my fundamental behavior is also the problem. But I think what has happened in social media at this point is a little bit tragic.
And it's most tragic because so few people seem willing to say that even though I don't like how this looks, my being here is what sustains it in its current form. And I think until people get over the collective action problem – that you have to leave before everybody else has left in order for it to be OK for everybody else to leave – we're a little bit stuck. But I don't think we're going to be stuck forever. I'm a little surprised by how long we have been stuck for, though.
My life immediately improved after I quit Twitter in early 2021. In retrospect, I see Twitter as a harmful addiction. On the extremely rare occasions when I've dipped into looking at Twitter since then, it's always made me feel really yucky and frazzled afterward. But I still feel why it's addictive.
The same overall critique can be applied to TikTok without many modifications. Serious discourse on TikTok suffers in the same ways as on Twitter, for the same reasons.
And any Twitter copycat, such as Bluesky, or TikTok copycat, such as Instagram Reels, has the same problems, since they've deliberately copied those platforms as closely as possible, including what makes them bad.
Is AI risk classified as a longtermist cause? If so, why?
It seems like a lot of people in EA think that AI risk is a relevant concern within the next 10 years, never mind the next 100 years. My impression is that most of the people who think so believe the near term is enough to justify worrying about AI risk, and that you don't need to invoke people who won't be born for another 100 years to make the case.
Welcome to the EA Forum!
You know what, I don't mean to discourage you from your project. Go for it.
Can you explain this math for me? The figure you started with is $21 billion in Anthropic equity, so what's your figure for Open Philanthropy/Coefficient Giving? Dustin Moskovitz's net worth is $12 billion and he and Cari Tuna have pledged to give at least 50% of it away, so that's at least $6 billion. $21 billion is only 3.5x more than $6 billion, not 10x.
If, of that $21 billion in Anthropic equity, 50% is owned by people who identify with or like effective altruism, that's $10.5 billion. If they donate 50% of it to EA-related charities, that's around $5 billion. So, even on these optimistic assumptions, that would only be around one Open Philanthropy (now Coefficient Giving), not ten.
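Spelling out that back-of-the-envelope math (the two 50% fractions are optimistic assumptions I'm granting for the sake of argument, not data):

```python
# Back-of-the-envelope comparison of plausible EA-directed Anthropic giving
# vs. the Moskovitz/Tuna pledge. The 50% fractions are assumptions, not data.
anthropic_equity = 21e9   # the $21 billion figure cited above
ea_aligned_share = 0.5    # fraction held by people who identify with or like EA (assumed)
donated_share = 0.5       # fraction of that actually donated to EA-related charities (assumed)

ea_directed_giving = anthropic_equity * ea_aligned_share * donated_share
print(f"Plausible EA-directed giving: ${ea_directed_giving / 1e9:.2f}B")   # $5.25B

moskovitz_tuna_pledge = 12e9 * 0.5   # at least half of a ~$12B net worth
print(f"Ratio to the Moskovitz/Tuna pledge: "
      f"{ea_directed_giving / moskovitz_tuna_pledge:.2f}x")                # ~0.88x, not 10x
```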
What didn't I understand? What did I miss?
(As a side note, the time horizon of 2-6 years is quite long...)