Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I'm trying to figure out where effective altruism can fit into my life these days and what it means to me.
Yarrow Bouchard
What AI model does SummaryBot use? And does whoever runs SummaryBot use any special tricks on top of that model? It could just be bias, but SummaryBot seems better at summarizing stuff than GPT-5 Thinking, o3, or Gemini 2.5 Pro, so I'm wondering if it's a different model or maybe just good prompting or something else.
@Toby Tremlett, are you SummaryBot's keeper? Or did you just manage its evil twin?
That sounds right to me! Good job, SummaryBot!
I just posted an answer. I hope you find it helpful!
Hi Andreu. The EA Forum definitely has a lot of stuff about AI because that's the hot topic to talk about, and it sure seems like a lot of people in the movement these days are focused on AI. But according to a survey in 2024, the top priority cause area for 29% of people in EA is global poverty and global health, while the top priority for 31% of people is AI risk, so AI risk and global poverty/health are about tied, at least on that metric. (Another way of averaging the data from the same survey puts global poverty/health slightly ahead of AI risk.)
The last survey to ask where people in EA were donating is from way back in 2020. A whole lot has changed since 2020. For what it's worth, 62% of respondents to that survey said they were donating to global health and development charities, 27% said animal welfare, and 18% said AI and "long term".
The 2020 survey also found 16% of people named global poverty as their top cause, while 14% said AI risks. It's interesting that this is true given where people said they donated in 2020. I would guess that's probably because, regardless of which cause area you think is more important, it's not clear where you would donate if you wanted to reduce AI risk, whereas with global poverty there are many great options, including GiveWell's top charities. So, maybe even now, more people are donating to charities related to global poverty than to AI risk, but I don't know of any actual data on that.
By the way, if you click "Customize feed" on the EA Forum homepage, you can reduce or fully hide posts about any particular topic. So, you could see fewer posts on AI or just hide them altogether, if you want.
Also, if you want to read posts expressing skepticism about AI risk, the forum has an "AI risk skepticism" tag that makes it easy to find posts about that. You have different options for sorting these posts that will show you different stuff. "Top" (the default) will mostly show you posts from years ago. "New & upvoted" will mostly show you posts from within the last year (including some of mine!).
I mean, I agree that independent scrutiny is good, that it's great if someone volunteers to do that, and that it would be cool if someone could be paid to do that, but it's way understating it to say the issue with Vetted Causes was an insufficiently "professional tone" or that its work was not "up to the standards of paid full-time professionals". In my view, Vetted Causes did at least one thing that in a professional context would probably be considered an ethics violation, and might even open up an organization to legal liability.
Specifically, Vetted Causes accused a charity of fraud when that wasn't at all true, and they didn't retract the accusation after people pointed out it wasn't true. That's obviously unethical. A lawsuit definitely wouldn't be worthwhile, nor would it set a good precedent for the EA community, but it's the sort of thing you could sue someone for. It goes beyond mere criticism: it's saying something false, something that Vetted Causes should have known better than to believe, in a way that would have been really damaging if people had believed the falsehood.
"Thou shalt not bear false witness against thy neighbor".
Prudential longtermism is defanged by the strategy of procrastination - and that's not all
Thanks for sharing the papers. Some of those look really interesting. I'll try to remember to look at these again when I think of it and have time to absorb them.
What do you think of the Arch Mission Foundation's Nanofiche archive on the Moon?

Wouldn't a global totalitarian government (or a global government of any kind) require advanced technology and a highly developed, highly organized society? So, this implies a high level of recovery from a collapse, but, then, why would global totalitarianism be more likely in such a scenario of recovery than it is right now?
I have personally never bought the idea of "value lock-in" for AGI. It seems like an idea inherited from the MIRI worldview, which is a very specific view on AGI with some very specific and contestable assumptions about what AGI will be like and how it will be built. For instance, the concept of "value lock-in" wouldn't apply to AGI created through human brain emulation. And for other technological paradigms that could underlie AGI, are they like human brain emulation in this respect or unlike it? But this is starting to get off-topic for this post.
I guess you can put a lot of meaning into a little symbol. I wouldn't interpret a cross or an astrology sign as conveying a sense of superiority, necessarily; I would just think that person is really into being Christian or really into astrology.
If you see someone wearing a red ribbon relating to HIV/AIDS, I guess you could have the Curb Your Enthusiasm reaction of: "Wow, so they're trying to act like they're so much better than me because they care so much about AIDS? What a jerk!" Or you could just think, "Oh, I guess they care about AIDS for some reason."

I've never perceived anyone to be using the little blue and orange diamond icons to signal superiority. I interpret it as something more supportive and positive. It's reassuring to see other people do something altruistic so you don't feel crazy for doing it, and making a sacrifice feels more bearable when you see other people doing it too. (Imagine how different it would feel if, when you donated blood, you did it completely alone in an empty room vs. seeing lots of other people around who are giving blood at the same time too.)
I've never observed anyone trying to police someone over donating 10% of their income, or trying to pressure them to take the pledge, or judging them for not taking it. For all I know, that has happened to somebody somewhere; I've just never seen it, personally.
I would say don't worry too much about the 10% income pledge and just focus on whatever amount of donating or way of donating makes sense for you personally.
I would be concerned about people deciding to delay their donating by 40-50 years (or whatever it is), since there are probably huge opportunity costs. I hope that in 40-50 years all the most effective charities are way less cost-effective than the most effective charities today, because we will have made so much progress on global poverty, infectious diseases, and other problems. I hope malaria and tuberculosis aren't ongoing concerns in 40-50 years, meaning the Against Malaria Foundation wouldn't even exist anymore (mission accomplished!). But you said you're already donating about 1% of your income every year, so you're not holding off completely on donating.
Hi Zoe. I'm glad you've crossed over from lurking to participating. I gave this post an upvote even though, much as I wanted to agree, I disagree with a lot of it. I agree with this part:
the EA community does (subconsciously) enforce quite a bit of uniformity in thoughts and actions - everyone generally agrees on the most important causes and the most effective ways to contribute to these causes
The conformity is way too high, and the level of internal agreement is way too high (put differently, the amount of internal disagreement is way too low).
When I was involved in organizing my university EA group, one conversation we had was about the value of art. Someone in our group talked about a novel she had found important and impactful. Can we really say that anti-malarial bednets are more important than art? I think a lot of people in EA feel (and, indeed, people in our EA group at the time felt) a temptation to argue back against this point. But there's a more intriguing and more expansive conversation to be had if you don't argue back, take a breath, and really consider her point. (For example, have you considered the impact sci-fi has had on real-life science and technology? Have you considered the role fiction plays in teaching us moral lessons? Or in understanding emotions and relationships, which are what life is all about?)
I think, in general, it's way more interesting to have a mix of people with diverse personalities, interests, and points of view, even when that means sometimes entertaining some off-the-wall ideas. (I don't think what that person said about art was off-the-wall at all, but talk to enough random people about EA online or in real life and you'll eventually hear something unexpected.)
This is the part of your post I have the hardest time with:

I wonder if the orange or blue diamonds are sending the right signals (do we have data on how people hear about the pledge vs. their chance of taking it?). The little icon next to user names in social media is giving "cult" vibes again (think a cross or an astrological sign next to someone's user name).
Is the little orange or blue diamond so different from someone having an emoji in their username, or, in real life, wearing a little pink or red ribbon for breast cancer or HIV/AIDS awareness? I have a hard time relating to your perspective because if, on Twitter or wherever, I saw someone put a cross or an astrological sign next to their name, I think I would just assume they are religious or really into astrology. I wouldn't find it particularly scary or cult-y.
Personally I wish the EA Forum had more ways to zhuzh up how your username appears on posts and comments. The little diamonds are the only bit of colour we get around here.
Full-on profile pictures embedded in posts and comments might be too distracting, but I don't know... coloured usernames? Little badges to represent things like your country, your favourite cause area, or your identity (e.g. LGBT)? One advantage of having something like this is not just the zhuzh but also that it makes it easier to remember who's who, rather than having to memorize everyone's names. The little blue and orange diamonds already help a bit with this.
[Edit: I decided to zhuzh up my username with emojis because it looks ridiculous but also kinda cute and it really made me laugh. Lol.]

having my name on a public list and being asked to report my donations all the time for the rest of my life would definitely overwhelm me to the point of deterrence
Is this really what Giving What We Can asks you to do these days? I took the 10% pledge back in 2008 or 2009. I have no idea if my name is still on a public list and I don't think I have ever once reported my donations. I can empathize with hating the administrative burden part of it, because I really struggle with admin tasks of all kinds (I think a lot of people do) and I find a lot of admin stuff miserable and demoralizing.
I guess the point of reporting your donations is so that GWWC can say how much money people are donating as part of this movement, but obviously that's of secondary importance (a very, very distant second) to actually donating the money. I always saw the 10% pledge as a personal, spiritual commitment and not a promise I made to anyone else, nor as something I was obligated to report. It's a reminder to myself of what my values are: "hey, remember you said you were going to do this??"
So, if you feel you want to do the pledge but don't want to do the admin, just do the pledge and don't do the admin. :)

In fact, wouldn't it be much easier in general for people to conceptualize and pledge a certain % of their total assets to EA causes upon passing instead of doing it every year?
Would it be? You'd be asking people to think about dying, which isn't easy. Also, you'd be asking them to write a will, which is a lot of admin!
Also, if the average person who is interested in EA is 38 years old (which is Will MacAskill's age) and their average life expectancy is 80, doesn't that mean no one would donate anything to charity for, on average, the next 42 years? And wouldn't that be really bad?
I think your idea of donating a percentage of your passive income from capital gains to charity after you retire early is perfectly fine: that's just donating a percentage of your income, which is the whole idea in the first place. Maybe you'll want to donate less than 10%, and that's fine too.
I think everyone should find what works for their particular situation. The 10% pledge is formulated to be something that could apply to the majority of the population in high-income countries, but not something that necessarily makes the most sense for everyone in those countries.

"Sound like AI"... When I talk to my EA friends, they don't sound like AI-generated academic papers...
"Sounds like AI" is the wrong way to put this. Posts on the EA Forum don't sound like AI. They have a distinct voice that is different from ChatGPT, Claude, or Gemini. LLMs have a distinctive bland, annoying, breathless, insubstantial, and absolutely humourless style. The only real similarity between the EA Forum style and the LLM style is the formal tone. Maybe EA Forum posts sound like academic papers, but they don't sound like AI-generated academic papers.
I know because I've read a lot of stuff on the EA Forum and a lot of stuff written by AI. I can really tell the difference.

EA is also associated with obscure (to gen pop) concepts like longtermism, accelerationism, micromorts etc. ... When I talk to my EA friends... our colloquial/less researched exchanges can feel more convincing than reading way too many stats and big words.
This is more accurate. EA/the EA Forum has its own weird subculture and sublanguage, and it's pretty annoying. People use lingo and jargon that isn't useful or clear, and that sometimes has never even been defined. I hate the term "truthseeking" for this reason: what does it mean? (As far as I know, it's literally never been defined, anywhere, by anyone. And it's ambiguous. So, why is that term helpful or necessary?) People assume too much background knowledge and don't explain things in an accessible way; more accessible explanations wouldn't just help newcomers, they would help everyone.
What you said about casual, informal conversations with your EA friends being more persuasive is an argument in favour of people in EA having more casual, informal conversations on the EA Forum, or on podcasts, or whatever. Before I read your post, I already had the intuition that this would be a good idea.

I want to suggest to everyone the concept of doing public dialogues on the EA Forum, following the model of the Slack chats that FiveThirtyEight used to do on their blog. The FiveThirtyEight staff would pick a topic, chat about it on Slack, and then do some light editing (e.g. to add links/citations). Then they'd publish that on their blog. I think this could work really well for the EA Forum. You could either do the chat in real time (synchronously) or take time doing it (asynchronously). But I think it would be more fun if people didn't spend too much time writing each message, and if they tried to be more casual, informal, and conversational than EA Forum posts typically are. I just have a hunch that this would be a good format. (And anyone can message me if they want to try this with me.)
In terms of length, personally, I'm not as concerned with how long something is as I am with its economy of words. I don't like when things are long and they're longer than they needed to be. If something's long but it's still as short as it could have been, that's great. (That's why books exist!!) If something's long and I feel like it could have been 20% of its length, that's a huge drag. If something's short but it makes a complete point and says everything it really needs to say, that's like a delightful piece of candy. I love reading stuff like that. But not everything can be candy. (And if we feel like it should be, maybe we can blame Twitter for conditioning us to want everything to be said in 140-280 characters.)
What makes something feel longer or shorter is also how enjoyable it is to read, so it's also a matter of craft and style.
I think the area where academic publishing would be most beneficial for increasing the rigour of EA's thinking is AGI. That's the area where Tyler Cowen said people should "publish, publish, publish", if I'm correctly remembering whichever interview or podcast he said that on.
I think academic publishing has been great for the quality of EA's thinking about existential risk in general. If I imagine a counterfactual scenario where that scholarship never happened and everything was just published on forums and blogs, it seems like it would be much worse by comparison.
Part of what is important about academic publishing is exposure to diverse viewpoints in a setting where the standards for rigour are high. If some effective altruists started a Journal of Effective Altruism and only accepted papers from people with some prior affiliation with the community, then that would probably just be an echo chamber, which would be kind of pointless.
I liked the Essays on Longtermism anthology because it included critics of longtermism as well as proponents. I think that's an example of academic publishing successfully increasing the quality of discourse on a topic.
When it comes to AGI, I think it would be helpful to see some response to the ideas about AGI you tend to see in EA from AI researchers, cognitive scientists, and philosophers who are not already affiliated with EA or sympathetic to its views on AGI. There is widespread disagreement with EA's views on AGI from AI researchers, for example. It could be useful to read detailed explanations of why they disagree.
Part of why academic publishing could be helpful here is that it's a commitment to serious engagement with experts who disagree, in a long-form format where you're held to a high standard, rather than ignoring these disagreements or dismissing them with a meme, with handwavy reasoning, or with an appeal to the EA community's opinion, which is what tends to happen on forums and blogs.
EA really exists in a strange bubble on this topic, and its epistemic practices are unacceptably bad, scandalously bad (if it were a letter grade, it would be an F in bright red ink). People in EA could really improve their reasoning in this area by engaging with experts who disagree, not with the intent to dismiss or humiliate them, but to actually try to understand why they think what they do and seriously consider whether they're right. (Examples of scandalously bad epistemic practices include: many people in EA apparently never once even hearing that an opposing point of view on LLMs scaling to AGI exists, despite it being the majority view among AI experts, let alone understanding the reasons behind that view; some people in EA openly mocking people who disagree with them, including world-class AI experts; and, in at least one instance, someone with a prominent role responding to an essay on AI safety/alignment that expressed an opposing opinion without reading it, just based on guessing what it might have said. These are the sort of easily avoidable mistakes that predictably lead to having poorly informed and poorly thought-out opinions, which, of course, are more likely to be wrong as a result. Obviously these are worrying signs for the state of the discourse, so what's going on here?)

Only weird masochists who dubiously prioritize their time will come onto forums and blogs to argue with people in EA about AGI. The only real place where different ideas clash online, Twitter, is completely useless for serious discourse, and, in fact, much worse than useless, since it always seems to end up causing polarization, people digging in on opinions, crude oversimplification, and in-group/out-group thinking. Humiliation contests and personal insults are the norm on Twitter, which means people are forming their opinions not based on considering the reasons for holding those opinions, but based on needing to "win". Obviously that's not how good thinking gets done.
Academic publishing (or, failing that, something that tries to approximate it in terms of the long-form format, the formality, the high standards for quality and rigour, the qualifications required to participate, and the norms of civility and respect) seems like the best path forward to get that F up to a passing grade.
M-Discs are certainly interesting. What's complicated is that the company that invented M-Discs, Millenniata, went bankrupt, and that has sort of introduced a cloud of uncertainty over the technology.
There is a manufacturer, Verbatim, with the license to manufacture discs using the M-Disc standard and the M-Disc branding. Some customers have accused Verbatim of selling regular discs with the M-Disc branding at a huge markup. This accusation could be completely wrong and baseless (Verbatim has denied it), but it's sort of hard to verify what's going on anymore.
If Millenniata were still around, they would be able to tell us for sure whether Verbatim is still complying properly with the M-Disc standard and whether we can rely on their discs. I don't understand the nuances of optical disc storage well enough to really know what's going on. I would love to see some independent, reputable, trustworthy third party with expertise in this area tell us whether the accusations against Verbatim are really just a big misunderstanding.
Millenniata's bankruptcy is an example of the unfortunate economics of archival storage media. Rather than pay more for special long-lasting media, it's far more cost-effective to use regular, short-term storage media (today, almost entirely hard drives) and periodically copy the data over to new media. This means the market for archival media is small.
As for how many physical locations digital data is kept in, that depends on what it is. The CLOCKSS academic archive keeps digital copies of 61.4 million academic papers and 550,000 books in 12 distinct physical locations. I don't know how Wikipedia does its backups, mirroring, or archiving internally, but every month an updated copy of the English Wikipedia is released that anyone can download. Given Wikipedia's openness, it is unusually well-replicated across physical locations, just considering the number of people who download copies.
I also don't know how the EA Forum manages its backups or archiving internally, but a copy of posts can be saved using the Wayback Machine, which will create at least 2 additional physical copies on the Internet Archive's servers. I don't know what Google does with YouTube videos. I think for Google Drive data they keep enough data to recover files in at least two physically separate datacentres, but those could be two datacentres in the same region. I also don't know if they do the same for YouTube data, but I hope so.
I think in the event of a global catastrophe like a nuclear war, what we should think about is not whether the data would physically survive somewhere on a hard drive, but, more practically, whether it would ever actually be recovered. If society is in ruins, then it doesn't really matter if the data physically survives somewhere unless it can be accessed and continually copied over so that it's preserved. Since hard drives last for such a short time, the window of time for society to recover enough to find, access, and copy the data from hard drives is quite narrow.
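To make that "narrow window" point a bit more concrete, here is a deliberately crude toy sketch. All of the numbers are assumptions I made up for illustration (the mean hard drive lifetime, the number of copies, the delay before anyone can recover the data); none of this is real archive data.

```python
import math

# Toy model (illustrative assumptions only, not data from any real archive):
# each copy of a dataset sits on a hard drive whose failure time is roughly
# exponential with some mean lifetime, and nobody can re-copy the data until
# society has recovered, `recovery_years` after the catastrophe.

def p_any_copy_survives(n_copies: int, mean_drive_life_years: float,
                        recovery_years: float) -> float:
    """Probability that at least one of n independent copies is still
    readable when recovery finally happens."""
    p_one_survives = math.exp(-recovery_years / mean_drive_life_years)
    return 1 - (1 - p_one_survives) ** n_copies

if __name__ == "__main__":
    for recovery_years in (5, 20, 50):
        p = p_any_copy_survives(n_copies=3,
                                mean_drive_life_years=6,  # assumed, for illustration
                                recovery_years=recovery_years)
        print(f"recovery after {recovery_years:>2} years: "
              f"P(data still readable) ~ {p:.2f}")
```

Under these made-up numbers, even a couple of decades of societal collapse makes recovery from hard drives very unlikely, which is the intuition behind the paragraph above.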
I don't know if you were asking about paper books or ebooks, but for paper books, it seems clear that for any book on the New York Times bestseller list, there must be at least one copy of that book in many different libraries, bookstores, and homes in many locations. I don't know how to think about the probability of copies ending up in Argentina, Iceland, or New Zealand, but it seems like at least a lot of English bestsellers must end up in various libraries, stores, and homes in New Zealand.
Paper books printed on acid-free paper with a 2% alkaline reserve, which, as far as I understand, has been the standard for paper books printed over the last 20 years or so, are expected to last over 100 years provided they are kept in reasonably cool, dry, and dark conditions. I'm not sure exactly how the estimated longevity would change for books kept in a tropical climate vs. a temperate one. The 2% alkaline reserve in the paper is there so that as the natural acid in the paper's cellulose is slowly released over time, the alkaline counteracts it and keeps the paper neutral. Paper is really such a fascinating technology and more miraculous than we give it credit for.
Vinyl records are more important for preserving culture (specifically music) than knowledge or information, but it's interesting that vinyl sales are so high and that vinyl would probably end up being the most important technology for the preservation of music in some sort of global disaster scenario. In 2024, the top ten bestselling albums on vinyl in the U.S. sold between 175,000 copies (for Olivia Rodrigo at #10) and 1,489,000 copies (for Taylor Swift at #1). The principle here is the same as for paper books. You have to imagine these records are spread out all over the United States. Given that both vinyl records and many of the same musicians are popular in other countries like Canada, the UK, Australia, and New Zealand, it seems likely there are many copies elsewhere in the world too.
Since looking into this topic, I have warmed considerably to vinyl. I didn't really get the vinyl trend before. I guess I still don't, really, but now I think vinyl is a wonderful thing, even if the reason people are buying it is not that it makes the preservation of music more resilient to a global disaster.
I didn't need any convincing to be fond of paper books, but paper just seems more and more impressive the more I think about it.
Quite interesting!
I'm not sure I'm able to follow anything you're trying to say. I find your comments quite confusing.
I don't agree with your opinion that academia is nothing but careerism and, presumably, that effective altruism is something more than that. I would say effective altruism and academia are roughly equally careerist and roughly equally idealistic. I also don't agree that effective altruism is more epistemically virtuous than academia, or more capable of promoting social change, or anything like that.
Thank you for your kindness. I appreciate it. :)
Do the two papers you mentioned give specific quantitative information about how much LLM performance increases as the compute used for RL scales? And is it a substantially more efficient scaling than what Toby Ord assumes in the post above?
In terms of AI safety research, this is getting into a very broad, abstract, general, philosophical point, but, personally, I'm fairly skeptical of the idea that anybody can do AI safety research today that can be applied to much more powerful, much more general AI systems in the future. I guess if you think the more powerful, more general AI systems of the future will just be bigger versions of the type of systems we have today, then it makes sense why you'd think AI safety research would be useful now. But I think there are good reasons for doubting that, and LLM scaling running out of steam is just one of those good reasons.
To take a historical example, the Machine Intelligence Research Institute (MIRI) had some very specific ideas about AI safety and alignment dating back to before the deep learning revolution that started around 2012. I recall having an exchange on Facebook with Eliezer Yudkowsky, who co-founded MIRI and does research there, sometime around 2015-2017, where he expressed doubt that deep learning was the way to get to AGI and said his best bet was that symbolic AI was the most promising approach. At some point, he must have changed his mind, but I can't find any writing he's done or any talk or interview where he explains when and why his thinking changed.
In any case, one criticism (which I agree with) that has been made of Yudkowsky's and MIRI's current ideas about AI safety and alignment is that these ideas have not been updated in the last 13 years, and remain the same ideas that Yudkowsky and MIRI were advocating before the deep learning revolution. And there are strong reasons to doubt they still apply to frontier AI systems, if they ever did. What we would expect from Yudkowsky and MIRI at this point is either an updating of their ideas about safety and alignment, or an explanation of why their ideas, developed with symbolic AI in mind, should still apply, without modification, to deep learning-based systems. It's hard to understand why this point hasn't been addressed, particularly since people have been bringing it up for years. It comes across, in the words of one critic, as a sign of thinkers who are "persistently unable to update their priors."
What I just said about MIRI's views on AI safety and alignment could be applied to AI safety more generally. Ideas developed on the assumption that current techniques, architectures, designs, or paradigms will scale all the way to AGI could turn out to be completely useless and irrelevant if it turns out that more powerful and more general AI systems will be built using entirely novel ideas that we can't anticipate yet. You used an aviation analogy. Let me try my own. Research on AI safety that assumes LLMs will scale to AGI, and is therefore based on studying the properties peculiar to LLMs, might turn out to be a waste of time if technology goes in another direction, just as aviation safety research that assumed airships would be the technology underlying air travel, and therefore focused on the properties of hydrogen and helium gas, has no relevance to a world where air travel is powered by airplanes that are heavier than air.
It's relevant to bring up at this point that a survey of AI experts found that 76% of them think it's unlikely or very unlikely that current AI techniques, such as LLMs, will scale to AGI. There are many reasons to agree with the majority of experts on this question, some of which I briefly listed in a post here.
Because I don't see scaling up LLMs as a viable path to AGI, I personally don't see much value in AI safety research that assumes it is a viable path. (To be clear, AI safety research that is about things like how LLM-based chatbots can safely respond to users who express suicidal ideation, and not be prompted into saying something harmful or dangerous, could potentially be very valuable, but that's about present-day use cases of LLMs and not about AGI or global catastrophic risk, which is what we've been talking about.) In general, I'm very sympathetic to a precautionary, "better safe than sorry" approach, but, to me, AI safety or alignment research can't even be justified on those grounds. The chance of LLMs scaling up to AGI seems so remote.
It's also unlike the remote chance of an asteroid strike, where we have hard science that can be used to calculate that probability rigorously. It's more like the remote chance that the Large Hadron Collider (LHC) would create a black hole, which can only be assigned a probability above zero because of fundamental epistemic uncertainty, i.e., based on the chance that we've gotten the laws of physics wrong. I don't know if I can quite put my finger on why I don't like a form of argument in favour of practical measures to mitigate existential risk based on fundamental epistemic uncertainty. I can point out that it would seem to have some very bizarre implications.
For example, what probability do we assign to the possibility that Christian fundamentalism is correct? If we assign a probability above zero, then this leads us literally to Pascal's wager, because the utility of heaven is infinite, the disutility of hell is infinite, and the cost of complying with the Christian fundamentalist requirements for going to heaven is not only finite but relatively modest. Reductio ad absurdum?
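Just to spell out the arithmetic behind that reductio, here is the standard expected-utility presentation of Pascal's wager (a generic sketch, not anything specific to this thread; p is whatever nonzero probability gets assigned to the fundamentalist view being correct, and c is the finite cost of complying):

```latex
% Standard expected-utility form of Pascal's wager (illustrative sketch).
% p : any probability > 0 assigned to the fundamentalist view being correct
% c : the finite (and relatively modest) cost of complying
\mathbb{E}[\text{comply}] = p \cdot (+\infty) + (1 - p)(-c) = +\infty
\qquad
\mathbb{E}[\text{don't comply}] = p \cdot (-\infty) + (1 - p) \cdot 0 = -\infty
```

For any p > 0, no matter how tiny, compliance "wins" by an infinite margin, which is exactly the sort of conclusion that makes arguments from bare nonzero probability look suspicious.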
By contrast, we know for sure dangerous asteroids are out there, we know they've hit Earth before, and we have rigorous techniques for observing them, tracking them, and predicting their trajectories. When NASA says there's a 1 in 10,000 chance of an asteroid hitting Earth, that's an entirely different kind of probability than if a Bayesian utilitarian guesses there's a 1 in 10,000 chance that Christian fundamentalism is correct, that the LHC will create a black hole, or that LLMs will scale to AGI within two decades.
One way I can try to articulate my dissatisfaction with the argument that we should do AI safety research anyway, just in case, is to point out there's no self-evident or completely neutral or agnostic perspective from which to work on AGI safety. For example, what if the first AGIs we build would otherwise have been safe, aligned, and friendly, but by applying our alignment techniques developed from AI safety research, we actually make them incredibly dangerous and cause a global catastrophe? How do we know which kind of action is actually precautionary?
I could also make the point that, in some very real and practical sense, all AI research is a tradeoff against other kinds of AI research that could have been done instead. So, maybe instead of focusing on LLMs, it's wiser to focus on alternative ideas like energy-based models, program synthesis, neuromorphic AI, or fundamental RL research. I think the approach of trying to squeeze Bayesian blood from the stone of uncertainty by making subjective guesses of probabilities can only take you so far, and pretty quickly the limitations become apparent.
To fully make myself clear and put my cards completely on the table, I don't find effective altruism's treatment of the topic of near-term AGI to be particularly intellectually rigorous or persuasive, and I suspect at least some people in EA who currently think very near-term AGI is very likely will experience a wave of doubt when the AI investment bubble pops sometime within the next few years. There is no external event, no evidence, and no argument that can compel someone to update their views if they're inclined enough to resist updating, but I suspect there are some people in EA who will interpret the AI bubble popping as new information and will take it as an opportunity to think carefully about their views on near-term AGI.
But if you think that very near-term AGI is very likely, and if you think LLMs very likely will scale to AGI, then this implies an entirely different idea of what should be done, practically, in the area of AI safety research today, and if you're sticking to those assumptions, then I'm the wrong person to ask about what should be done.
This is a very strange critique. The claim that research takes hard work does not logically imply a claim that hard work is all you need for research. In other words, to say hard work is necessary for research (or for good research) does not imply it is sufficient. I certainly would never say that it is sufficient, although it is necessary.
Indeed, I explicitly discuss other considerations in this post, such as the "rigour and scrutiny" of the academic process and what I see as "the basics of good epistemic practice", e.g. open-minded discussion with people who disagree with you. I talk about specific problems I see in academic philosophy research that have nothing to do with whether people are working hard enough or not. I also discuss how, from my point of view, ego concerns can get in the way, and how love for research itself (and maybe I should have added curiosity) seems to be behind most great research. But, in any case, this post is not intended to give an exhaustive, rigorous account of what constitutes good research.
If picking examples of academic philosophers who did bad research or came to bad conclusions is intended to discredit the whole academic enterprise, I discussed that form of argument at length in the post and gave my response to it. (Incidentally, some members of the Bay Area rationalist community might see Heidegger's participation in the Nazi Party and his involvement in book burnings as evidence that he was a good decoupler, although I would disagree with that as strongly as I could ever disagree about anything.)
I think accounting for bias is an important part of thinking and research, but I see no evidence that effective altruism is any better at being unbiased than anyone else. Indeed, I see many troubling signs of bias in effective altruist discourse, such as disproportionately valuing the opinion of other effective altruists and not doing much to engage seriously and substantively with the opinions of experts who are not affiliated with effective altruism.
I think effective altruism is as much attached to intellectual tradition and as much constrained by political considerations as pretty much anything else. No one can transcend the world with an act of will. We are all a part of history and culture.
I think you should practice turning your loose collections of thoughts into more of a standard essay format. That is an important skill. You should try to develop that skill. (If you don't know how to do that, try looking for online writing courses or MOOCs. There are probably some free ones out there.)
One problem with using an LLM to do this for you is that it's easy to detect, and many people find that distasteful. Whether it's fully or partially generated by an LLM, people don't want to read it.
Another problem with using an LLM is that you're not really thinking or communicating. The act of writing is not something that should be automated. If you think it should be automated, then don't post on the EA Forum and wait for humans to respond to you; just paste your post into ChatGPT and get its opinion. (If you don't want to do that, then you also understand why people don't want you to post LLM-generated stuff on here, either.)
Disciplined iconoclasm
I'm sorry to say this post is very difficult to follow. The discussion of the confidential information that Oliver Habryka allegedly shared is too vague to understand. I assume you are trying to be vague because you don't want to disclose confidential information. That makes sense. But it also makes it impossible to understand the situation.
I wouldn't donate to Lightcone Infrastructure and I'd recommend against it, but for different reasons than the ones stated in this post.
No, irreducible uncertainty is not all-or-nothing. Obviously a person should do introspection and analysis when making important decisions.
For what it's worth, Peter Singer's organization The Life You Can Save has a donation pledge that adjusts the percentage based on your income. You can type in your income and it will give you a percentage back. At $10,000, it's 0%. At $50,000, it's 1%. At $100,000, it's 1.8%. At $500,000, it's 10%. And at $1,000,000, it's 15%.
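To give a rough sense of how such a sliding scale behaves between those points, here is a small sketch that just linearly interpolates the percentages quoted above. This is my own illustrative interpolation, not The Life You Can Save's actual formula, and the function name is made up.

```python
# Rough sketch of a sliding-scale pledge, interpolating the example
# percentages quoted above. NOT The Life You Can Save's actual formula,
# just a linear interpolation between those quoted data points.

# (income in USD, suggested percentage of income)
POINTS = [(10_000, 0.0), (50_000, 1.0), (100_000, 1.8),
          (500_000, 10.0), (1_000_000, 15.0)]

def suggested_percentage(income: float) -> float:
    """Linearly interpolate a suggested giving percentage for an income."""
    if income <= POINTS[0][0]:
        return POINTS[0][1]
    if income >= POINTS[-1][0]:
        return POINTS[-1][1]
    for (x0, y0), (x1, y1) in zip(POINTS, POINTS[1:]):
        if x0 <= income <= x1:
            return y0 + (y1 - y0) * (income - x0) / (x1 - x0)

if __name__ == "__main__":
    for income in (30_000, 75_000, 250_000):
        pct = suggested_percentage(income)
        print(f"${income:,}: ~{pct:.1f}% (~${income * pct / 100:,.0f}/year)")
```

Under this interpolation, someone earning $75,000 would be nudged toward roughly 1.4%, far below a flat 10%.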
So, this pledge is less demanding than the Giving What We Can pledge and, also, nobody is saying you have to take either pledge to be a part of EA.
Most people on the EA Forum don't seem to have the little blue or orange diamonds next to their usernames. Probably at least a few have taken the Giving What We Can pledge and just haven't added a diamond, but as far as I know, a lot of people genuinely haven't taken it. Maybe even the majority, who knows. When I ran an EA group at my university, I think at least about half of the regular, active members didn't take the GWWC pledge, and I'd guess it was probably more than half. (It was a long time ago, and it's not something we kept track of.)
In my personal experience with EA, I've never seen or heard anyone say anything like, "You should/need to take the pledge!" or "Why haven't you taken the pledge yet?" I've never seen anyone try to give someone the hard sell on the GWWC pledge or, for that matter, even try to convince them to take it at all.
Personally, I'm very much a proponent of not telling people what to do, and not trying to pressure people into doing anything. My approach has always been to respect people's autonomy and simply talk about why I donate, or why I think donating in general is a good idea, to the extent they're curious and want to know more about those things.
I think where Matthew's comments resonate is just that it's hard to understand how your math checks out. For example, the average lifetime earnings of Americans with a graduate degree (which are significantly higher than for all other educational cohorts, including those with only bachelor's degrees) from age 20 to 69 are $3.05 million, adjusted for inflation from 2015, when this data was collected, to 2025. If you're earning around $1 million a year, then within about 3 years at that income level, your lifetime earnings will match the average lifetime earnings of Americans with a graduate degree. It's hard to square the idea that you only want to live a frugal lifestyle, comparable to someone around the U.S. poverty line, or even a lifestyle equivalent to someone with the U.S. median income, with the idea that you earn around $1 million a year and that donating 10% of your income is too demanding, even accounting for the fact that you want to retire extremely early.
And retiring before age 30 is itself a sort of luxury good. Even if donating 10% of your income would cause you to overshoot your goal by, say, 2 years and retire at age 31 instead of age 29, is that really a flaw in the concept of donating 10% of your income to help the world's poorest people or animals in factory farms? If it is correct to think of extremely early retirement as a kind of luxury good, then is it all that different for someone to say the 10% pledge asks too much because it would require them to retire at 31 instead of 29 than it would be for someone to say the pledge asks too much because they want to buy a $600,000 Lamborghini? I'm not passing judgment on anyone's personal choices, but I am questioning if it's a valid criticism of the GWWC pledge that it might be incompatible with some people acquiring certain luxury goods reserved for the wealthiest 1% of people in high-income countries. So what if it is? Why is that a problem? Why should people in EA want to change that?
But in any case, it's up to you to decide what percentage you want to donate out of your current income or your investment income after you retire early. If 10% is too onerous, you can donate less than 10%. You could put whatever you expect your income during retirement to be into The Life You Can Save's calculator and see if that would be an amount you'd be comfortable giving after you retire. Every additional dollar donated is a better outcome than one fewer dollar being donated. So, just think about what you want to donate, and donate that.
People in EA already do tend to think in marginal terms and to wonder what the equivalent of the Laffer curve for effective altruism might be. Nobody has ever gotten this down to an economic science, or anything close, but it's something people have been thinking about and talking about for a long time. My general impression is that most people in EA have been very open to people coming into EA with various levels of commitment, involvement, or donating.
The only real counterexample to this I can think of is when one person who has since (I believe) disassociated themselves from EA argued in defense of the purchase of Wytham Abbey by the parent organization of the Centre for Effective Altruism. Their argument was that it's all the better if normal people find this repugnant, since it signals (or countersignals) that EA has weird ideas and morals, and this helps attract the weird people that EA needs to attract to, I don't know, solve the problems with technical AI alignment research and save the world. I find this ridiculous and quite a troubling way to think, and I'm glad most people in EA seem to disagree with this view on the Wytham Abbey purchase, and with this kind of view in general about signaling (or countersignaling) correctly so as to attract only the pure minds EA needs.
Maybe there's still some of that going around, I don't know, maybe there's a lot of it, but somehow I retain the impression that most people in EA aren't into gatekeeping or purity tests of that kind. On the other hand, I'm only really thinking here about joining the movement at the entry level; if you want a job at an EA organization or something like that, people will probably start to gatekeep and apply purity tests.