Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I'm trying to figure out where effective altruism can fit into my life these days and what it means to me.
Yarrow
I find it hard to understand what this post is trying to say. The title is attention-grabbing, but the body of the post is pretty inscrutable.
When I read posts like this, I have a hard time telling if the author is deliberately trying to obscure what they're saying in order to not be too blunt or too rude or whatever (maybe they think they won't seem smart enough if they write too plainly?), or if that's just their writing style. Either way, I don't like it.
This is a Forum Team crosspost from Substack.
What does this mean? Is the author of this post, Matt Reardon, on the EA Forum team? Or did a moderator/admin of the EA Forum crosspost this from Matt Reardon's Substack, under Matt's EA Forum profile?
I feel the same way: racism, and sympathy toward far-right and authoritarian views, in effective altruism is a reason for me to want to distance myself from the movement. The same goes for people who may not agree with these views but basically shrug and act like they're fine.
Here's a point I haven't seen many people discuss:
...many people could have felt betrayed by the fact that EA leadership was well aware of FTX sketchiness and didn't say anything (or weren't aware, but then maybe you'd be betrayed by their incompetence).
What did the EA leadership know and when did they know it? About a year ago, I asked in a comment here about a Time article that claims Will MacAskill, Holden Karnofsky, Nick Beckstead, and maybe some others were warned about FTX and/or Sam Bankman-Fried. I might have missed some responses to this, but I don't remember ever getting a clear answer.
If EA leaders heard credible warnings and ignored them, then maybe that shows poor judgment. Hard to say without more information.
Most people who do linkposts to their own writing put the link and also include the full text.
More people will read this if you put the full text of the post here.
Thanks for this post! Your forum bio says you're a professional economist at the Bank of Canada, which makes me trust your analysis more than if you were just a random layperson.
I don't know if you're interested in creating a blog or a newsletter, but it seems like this analysis should be shared more widely!
It seems in a lot of cases you have disagreed with concepts before understanding them fully. Would you agree? And if so, why do you think this happened here, where I'm sure that you are great at making evidence-based judgements in other areas?
This comes across as passive-aggressive. Neel's patient response below is right on the money.
If I recommend a book to someone on the EA Forum (or any forum), there's a slim chance they're going to read that book. The only way there's going to be a realistic chance they'll read it is either if I said something so interesting about it that it got them curious or if they were already curious about that topic area and decided the book is up their alley.
The same idea applies, to varying extents, to any other kind of media — blog posts, papers, videos, podcasts, etc.
A few of your other comments also contain stuff that comes across as passive-aggressive. (Particularly the ones that have zero or negative karma.)

I can empathize with your position, in that I understand what it's like to try to engage with people who have really different perspectives on a topic that is important to me, and that this often feels frustrating.
All I can say is that if your goal is persuasion or to have some kind of meeting of the minds, then saying stuff like this just pushes people further away.
The types of radical feminism you are mentioning in your first three bullet points are not types that I or the people or organisations I am mentioning would associate with. These groups are often labelled as Trans- or Sex worker- exclusionary radical feminists. It is a shame that they use this label too. They are generally funded by far right groups and instrumentalised to make it seem that they represent the feminist movement as a whole, or womenâs interests more broadly.
I think you're whitewashing the history of radical feminism a bit here. I think the radical feminist movement has to own these mistakes in order to move on from them. To say something like "that's not real radical feminism" or "that's a false flag operation" is not to acknowledge the reality of what happened and the harm that was done. For example, the pornography ban I mentioned was supported by key figures in radical feminism.
The fourth bullet point I hadn't heard about, and that ContraPoints video has been on my watch list for a long time now — I should really watch it!
If you're a fan of ContraPoints too, then that's one thing we can agree on! Her videos are wise, perspicacious, funny, and visually beautiful. I think they should win awards. I'm a huge fan.
Could you give a little more context on what you don't understand? I'm not sure I can see the same issues, at least at the moment.
I tried to read Pleasure Activism in part because the idea of "pleasure activism" sounded interesting to me. I wondered: is the idea to make activism more fun? More guided toward things that are emotionally rewarding, rather than all about pain and discomfort and altruistic self-sacrifice? Or, alternatively, is it about fighting for things that bring us pleasure and joy?
The book does not really explain this. It does not really explain what pleasure activism is, at least not in a way I could make any sense of. I'm not alone in this: I asked the friend who recommended the book to me if he understood what adrienne maree brown was trying to say, and he basically said no.
When I tried to read Pleasure Activism, I wanted to see if anyone could make more sense of it than me. One of the reviews I found, from a sympathetic reviewer,[1] was generally positive, but also called out how confusing the book is. (It also mentions adrienne maree brown's claim that she was bitten by a vampire.)
To me, if you write a book about a new idea and you don't explain what that idea is in a way that's easy to understand, your book has failed as a piece of scholarship. If I can't understand what you're trying to say, and especially if you don't even try particularly hard to explain it, then there's nothing I can do with your work. It can't affect me. It can't cause me to think or act differently. I can't engage with it. I can't even disagree with it, because I don't know what I would be disagreeing with.
One thing I'll say for now is that there are certainly parts of the feminist movements that you will strongly disagree with, and that disagreement is welcome.
I am a feminist and I have a good grasp on feminist theory, partly because I took courses on feminist theory when I was in university. I already articulated four key points I disagree with many radical feminists about — trying to harm trans people, banning pornography, opposing decriminalization or legalization of sex work, and narrow views on what kinds of sex are ethical. Especially nowadays, I would guess there are some people who call themselves radical feminists who have different views on these topics (and this seems to be what you're saying). But I also probably disagree with those people as well, for example on economic issues.
Some parts of the radical feminists' critiques were correct. They were correct to focus on many of the social and cultural phenomena that contribute to women's oppression (although they made some serious mistakes here, too, as I mentioned), going beyond a focus just on formal, legal equality (which is important, of course, but too limited). For example, the radical feminist critique of rape culture was hugely important.
It seems to me a lot of radical feminists' critiques have been absorbed into the mainstream in a way that didn't feel true (or nearly as true) 15 years ago.
This is a good thing for the mainstream, since the critiques that were absorbed are correct, but it also makes self-identified "radical feminists" today less relevant, since their good ideas are now a part of mainstream feminism — and feminism, in general, is more a part of mainstream culture — and what radical feminists have to offer is now less differentiated from mainstream feminism.

I agree that authoritarian communism is bad, but I have a lot more belief in degrowth. Could you give some more specifics on what your issues are with it?
Economic degrowth is the idea that we should make the world significantly poorer (i.e., significantly decrease the world's total income) for environmental reasons. I think this would be a humanitarian catastrophe on a scale that's hard to fathom. I also don't think it would be particularly helpful for achieving environmental goals, and it might even do harm, ultimately.
For example, if we could snap our fingers and cut the world's consumption of fossil fuels in half, millions of people would probably starve or die from otherwise preventable causes. And our progress on climate change might end up getting set back: with the world's economy so crippled, it would be hard to fund R&D into wind, solar, geothermal, nuclear, energy storage, and other sustainable energy technologies, or to make long-term capital investments in deploying these technologies.
If you want to read a more in-depth critique of the idea of economic degrowth, Kelsey Piper at Vox wrote one that's clear and accessible.

One of the most succinct and eloquent critiques of degrowth I have read comes from a review of Naomi Klein's book This Changes Everything:
The second, incredibly risky response to the climate crisis that she [Naomi Klein] recommends is a policy of "degrowth" (88). This is sort of a euphemism for reducing the size of GDP, which in practice means creating a policy-induced, long-term recession, followed (presumably) by measures designed to restrict the economy to a zero-growth equilibrium. Now because she plans to shift millions of workers into low-productivity sectors of the economy (126-7), and perhaps reduce work hours (93), she imagines that this degrowth can happen without creating any unemployment. So the picture presumably is one in which individuals experience a slow, steady decline in real income, of perhaps 2% per year over a period of 10 years (none of the people recommending this seem to give specific numbers, so I'm just guessing what they have in mind), followed by permanent income stagnation. (There would, presumably, still be technological change, so a degrowth policy would have to be accompanied by some mechanism to ensure that work hours were cut back in response to any increase in productive efficiency, in order to ensure that production as a whole did not increase.)
At the same time that incomes are either shrinking or remaining stagnant, Klein also proposes an enormous shift from private-sector to public-sector consumption, presumably financed by significant increases in personal income tax. Again, she doesn't give any specific numbers, but from the way she talks it sounds like she wants to shift around a quarter of the remaining GDP. Plus she wants to see a huge amount of redistribution to the poor. So again, just ballparking, but it sounds as though she wants the average person to accept a pay cut of around 20%, followed by the promise of no pay increase ever again, combined with an increase in average income tax rates of around 25% (so in Canada, from around 30% to 55%). And don't forget, this is all supposed to be achieved democratically. As in, people are going to vote for this, not just once, but repeatedly.
What I find astonishing about proponents of "degrowth" — not just Klein, but Peter Victor as well — is that they don't see the tension between this desire to reduce average income and the desire to reduce economic inequality. They expect people to support increased redistribution at the same time that their own incomes are declining. This leaves me at something of a loss — I struggle to find words to express the depth of my incredulity at this proposition. In what world has this, or could this, ever occur?
In the real world, economic recessions are rather strongly associated with a significant increase in the nastiness of politics. Economic growth, on the other hand, makes redistribution much easier, simply because the transfers do not show up as absolute losses to individuals who are financing them, but rather as foregone gains, which are much more abstract. It's not an accident that the welfare state was created in the context of a growing economy. (See Benjamin Friedman, The Moral Consequences of Economic Growth, for a general discussion of the effect of growth on politics.) It seems to me obvious that a degrowth strategy — by making the economy negative-sum — would massively increase resistance to both taxation and redistribution. At the limit, it could generate dangerous blow-back, in the form of increased support for radical right-wing parties.
As a result, I just don't see any moral difference between what Klein is doing in this book and what the geoengineering enthusiasts are doing. The latter are techno-utopians, while Klein is a socialist-utopian. But both are trying to pin our hopes for resolving the climate crisis on a risky, untested, and potentially dangerous policy. Furthermore, the idea that Klein's agenda could be achieved democratically strikes me as being otherworldly, in a country where the left can't even figure out how to get the Conservative party out of power.
The "shadow" of degrowth is environmental authoritarianism, in which "the hardest choices require the strongest wills" (to quote a villain), and so the ability of people to resist unpopular policies that make them poorer needs to be quashed with force.
Some people go in the opposite direction and, rather than "biting the bullet" and endorsing an ugly conclusion, lean into cognitive dissonance and try to say that degrowth is not really about negative GDP growth after all, but about… something they either have a hard time making clear, or that just doesn't make sense, or that ends up amounting to green growth (the opposite of degrowth), or that ends up undermining their claim that degrowth is not about negative GDP growth.
Your concluding comments seem like rage-bait and might be an unnecessary addition to your otherwise very thoughtful reply.
It's not rage-bait, it's just rage. I have deep exposure to radical leftist ideas, spanning about 15 years, and I'm just fed up with so much of it. I think so much of radical leftist discourse is incoherent (like Pleasure Activism), insane (like degrowth), or evil (like the level of praise or apologetics for authoritarian communism you see in radical leftist communities). And the way that radical leftists try to advance their ideas is often cruel and sadistic, for example, by harassing or bullying people who express disagreement (and sometimes by endorsing physical violence).[2] I am angry at the radical left for being this way.
I have been as much of an insider to radical leftism as it's possible to be. I know the ins and outs. My perspective does not come from a shallow gloss of radical leftism, but from a deep familiarity.
I think probably one of the most effective ways to limit the harm caused by the radical left as it currently exists is to try to fill the vacuum of liberal, progressive, centre-left, or leftist ideas for structural reform.
One of the most encouraging examples I've seen is the economist Thomas Piketty's short political manifesto at the end of his book Capital and Ideology. The manifesto is the final chapter of the book, titled "Elements for a Participatory Socialism for the Twenty-First Century". This is the most coherent, most sane, and most constructive version of radical leftist economic thought (if it is accurate to call it radical leftist) I have ever seen. More of this, please!
I am also reading (almost finished) Ezra Klein and Derek Thompson's book Abundance, which just came out this year. It's awesome. I am sold on the idea of "abundance liberalism", which started out being called "supply-side progressivism", but now has a much better name and has probably also expanded a bit in terms of the ideas it encompasses.
As much as I'm fed up with so much about the radical left as it exists today, just complaining about the radical left probably isn't a good strategy for changing things for the better. We should come up with good, constructive ideas to draw people away from bad, destructive ideas and to take the energy away from bad, destructive political discourse.
The importance of any of my criticisms (of radical feminism, of radical leftism, of degrowth) pales in comparison to the importance of coming up with and advocating for good ideas that can offer an alternative. This is hard work and it's where I want to put more of my focus going forward.
I don't know how much energy I have to continue this thread of conversation, so if you decide to reply, please do so knowing that I may not read your reply or respond to it. I can get really into writing stuff on the EA Forum, but it takes up a lot of my time and energy, and I have to prioritize.
- ^
I'm not clear on this, but I think the person who wrote the review even works at AK Press, the radical leftist publishing company that published the book.
- ^
The phrase "the cruelty is the point" has been used as a criticism of Donald Trump and the Republican Party under his leadership, but it would also apply aptly to a lot of radical leftists' behaviour.
I can't shake off the feeling that this type of argument has often aged poorly when it comes to AI. I've certainly been baffled many times by AI solving tasks that I predicted to be very hard.
This may be true for games like chess, go, and StarCraft, or for other narrow tests of AI. But for claims that AI will do something useful, practical, and economically valuable — like driving cars or replacing humans on assembly lines — the opposite is true. The predictions of rapid AI progress have been dead wrong and the AI skeptics have been right.
From the 2019 announcement:
Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress.
I remember OpenAI or Sam Altman saying that in subsequent funding rounds, the profit cap would decrease, eventually reaching 2x. But I can't find a source for this right now. (Even if I am remembering correctly, who knows if OpenAI ever actually lowered the cap below 100x.)
Now that I'm thinking about it more, the 100x profit cap was always too low. If OpenAI's valuation in 2019 was $3 billion or less, then now, with a valuation of $300 billion, those first-round investors would already have hit the cap on their returns.
It seems like OpenAI was already trying to rectify this problem before its recent announcement. In 2023, The Economist reported that:
Profits for investors in this venture were capped at 100 times their investment (though thanks to a rule change this cap will rise by 20% a year starting in 2025). Any profits above the cap flow to the parent non-profit.
If the purpose of the profit cap (or return-on-investment cap) is to limit OpenAI investors from having an obscene level of ownership over the wealth generated by AGI, then this makes a lot more sense. (I'm assuming that the profit cap is abandoned now and this is a moot point, but I find it interesting to think about anyway.)
If OpenAI had stuck to the plan of increasing the 100x profit cap by 20% every year starting in 2025, here's what the profit cap would be in future years:[1]
2030: 250x
2035: 620x
2040: 1,540x
2045: 3,830x
2050: 9,540x
2055: 23,740x
2060: 59,070x
2065: 146,980x
2070: 365,730x
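The figures above are just compound growth at 20% per year on the original 100x cap. As a sketch of the arithmetic (my own illustration, not anything from OpenAI):

```python
# Projected OpenAI profit cap under the reported plan: the 100x cap
# compounds at 20% per year starting in 2025. Results are rounded to
# the nearest ten, matching the figures in the list above.
def profit_cap(year, base=100, start=2025, rate=0.20):
    """Return the cap multiple for a given year under 20%/year compounding."""
    return round(base * (1 + rate) ** (year - start), -1)

for year in range(2030, 2075, 5):
    print(f"{year}: {profit_cap(year):,.0f}x")
```

Running this reproduces the list, e.g. 250x in 2030 and 365,730x in 2070.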
As time goes on, eventually the number gets too big, but even 365,730x is not a totally unprecedented return on investment in the pre-AGI world. Mike Markkula's angel investment in Apple would have had a return on investment (ROI) of over 3,000,000x had he retained his shares from the beginning until the 2020s.
If you look up lists of the stocks that have grown the most from IPO to their all-time high, or the venture capital investments that have had the best ROI, you see some numbers in the 1,000x to 10,000x range. So, a 10,000x cap would not be unreasonable.
If you think the amount of wealth generated by AGI will be essentially unlimited and defy calculation by conventional standards, then it shouldn't be a problem to have a cap of 10,000x or even 100,000x or 1,000,000x, since that would still end up being a small percentage of the overall amount of wealth generated by AGI.
- ^
I used this compound interest calculator to figure this out: https://www.investor.gov/financial-tools-calculators/calculators/compound-interest-calculator. I rounded these numbers to the nearest ten.
If you zoom out and think of effective altruism as a movement in favour of charity effectiveness and rigorous evaluations of charity, and in favour of giving more to charity than people typically give, then whether these ideas persist and grow is a different question from whether the term "effective altruism" or organizations like the Centre for Effective Altruism fall into decline.
The Gates Foundation, for example, predates the term "effective altruism" and embodies some of the same ideas and a similar intellectual spirit as effective altruism.
GiveWell, somewhat surprisingly, isn't really associated (at least, it doesn't seem to me like it is) with the effective altruist "brand", for whatever reason. Maybe I'm wrong, but I could see GiveWell continuing to operate and maintain a decent amount of popularity long after a hypothetical decline and fall of things explicitly called "effective altruism".
There is a version of effective altruism we could maybe call "EA exceptionalism" or "messianic effective altruism", which has existed for a long time (at least 10 years) and has never made sense. This is the view that effective altruism is somehow unlike or apart from all other efforts to help the world, that it has a unique power to see the truth and solve the world's problems, and that in some sense the world's fate depends on effective altruism. That's a crazy view, and if it dies, good riddance.
We also have to ask ourselves if the effective altruism movement (the movement explicitly calling itself "effective altruism") ever fully made sense or ever had a fully coherent version of what it was or what it was for. There's a weird mix of things in EA — charity effectiveness in the global poverty cause area, veganism, AGI doomsday prophecy, bizarre influences from the "rationalist community", academic moral philosophy, and weird, miscellaneous stuff that defies simple categorization, partly because some of it is undefined and unformed. (What on Earth is "truthseeking", for example? If "longtermism" is actually a novel idea, what does it actually tell us we should do differently?)
Maybe that's a mix of things that don't need to be together and should come apart again. Maybe this specific convergence of ideas and people and culture existed for a reason or a season, and that time has passed, and that's okay.
My advice is to adopt a beginner's mind and go back to basics. Does effective altruism, as a movement, still have a reason for existing? If so, what is that reason? If it's a good enough reason to motivate you, personally, focus on that. Put your efforts into that.
Investing in long-term interventions in global health and global poverty that are expected to pay off over decades is incompatible with the idea that AGI will be created within 10 years and will have transformative effects on the world — greater than the effects of the Industrial Revolution, akin to a century's worth of economic growth and a century's worth of progress in STEM (and adjacent fields) in the subsequent 10 years, and only picking up steam from there. So, the two most important ideas in EA are actually at odds with each other. That doesn't make sense. Why are they sharing a movement?
I don't see what good it does to try to keep these incompatible ideas bound together in the same movement. That might be a deeper reason for EA to struggle going forward than anything to do with FTX.
Do those other meditation centres make similarly extreme claims about the benefits of their programs? If so, I would be skeptical of them for the same reasons. If not, then the comparison is inapt.
If I had developed a meditation program that I really thought did what Jhourney claims their meditation program does, I would not be approaching it this way. I would try to make the knowledge as widely accessible as I could, as quickly as possible. Jhourney has been doing retreats for over two years. What's the hold-up?
Transcendental Meditation (TM)'s stated justification for its secrecy and high prices is that TM requires careful, in-person, one-on-one instruction. What's Jhourney's justification for not making instructional videos or audio recordings that anyone can buy for, say, $70?
Could it be just commercial self-interest? But, in that case, why hasn't the jhana meditation encouraged them to prize altruism more? Isn't that supposed to be one of its effects?
I'm willing to make some allowance for personal self-interest and for the self-interest of the business, of course. But selling $70 instructional materials to millions of people would be a good business. And the Nobel Peace Prize comes with both a $1 million cash prize and a lot of fame and acclaim. Similarly, the Templeton Prize comes with $1.4 million in cash and some prestige. There are other ways to capitalize on fame and esteem, such as through speaking engagements. So, sharing a radical breakthrough in jhana meditation with the world comes with strong business incentives and strong personal self-interest incentives. Why not do it?
The simplest explanation is that they don't actually have the "product" they're claiming to have. Or, to put it another way, the "product" they have is not as differentiated from other meditation programs as they're claiming, and it does not reliably produce the benefits they claim it reliably produces.
On the topic of shame and guilt, I really want to recommend what the emotions researcher Brené Brown says about it. The best, quickest way to understand what she has to say about shame and guilt is to watch her two TED Talks in release order.
The first talk, on vulnerability, only lightly touches on shame, but it provides context for the second talk, without which the second talk will make less sense.
The second talk, on shame, explicitly gets into shame and guilt, the differences between them, and the difference between their effects on behaviour.
Here's the core distinction, which she gives in the second talk:

The thing to understand about shame is, it's not guilt. Shame is a focus on self, guilt is a focus on behavior. Shame is "I am bad." Guilt is "I did something bad." … There's a huge difference between shame and guilt.
And here's what you need to know. Shame is highly, highly correlated with addiction, depression, violence, aggression, bullying, suicide, eating disorders. And here's what you need to know even more. Guilt, inversely correlated with those things. The ability to hold something we've done or failed to do up against who we want to be is incredibly adaptive. It's uncomfortable, but it's adaptive.

I think there's probably such a thing as maladaptive guilt, too. I vaguely remember Brené Brown briefly talking about this somewhere. If you feel guilt about something that's not your fault and that you can't control, or if your guilt is way out of proportion to what you did wrong, then maybe those could be cases where guilt is maladaptive.
But most of the time, people are saying "guilt" when what they're talking about is shame — a focus on self. So, most of the problems people have with "guilt" can actually be attributed to shame.
Further resources beyond the TED Talks:
- Brené Brown's book I Thought It Was Just Me (but it isn't), about shame and shame resilience
- Brené Brown's audio program The Power of Vulnerability (you can find it on Audible), in which shame and shame resilience are a major topic
- A more textbook-style book that Brené Brown recommends (and which I've only read a bit of, but which seems good): Shame and Guilt by June Price Tangney and Ronda L. Dearing, if you are interested in a more quantitative or more academic dive into the research
$1,295 is quite a steep price. Even with the $200 referral code discount, $1,095 is still a steep price.
What is the interactive or personalized aspect of the online "retreats"? Why couldn't they be delivered as video on demand (like a YouTube playlist), audio on demand (like a podcast), or an app like Headspace or 10% Happier?
From some poking around, I found that Jhourney has been doing retreats for at least two years, and possibly longer. It's hard to believe that the following could all be true:
- That around 40% of participants have a transformative experience (about 66% of participants say they experienced a jhana, and about 60% of that 66% say it was the best thing to happen to them in at least the past six months).
- That the people who have a transformative experience also have some sort of lasting, sustainable improvement to their lives long-term.
- That Jhourney's way of teaching meditation is so much different from and better than other ways of teaching meditation that have been broadly accessible for years — such as apps like Headspace or any number of meditation teachers or retreats that exist seemingly in (or near) every major city in North America — that it produces transformative experiences and sustained life improvement at a much higher rate.
This might be more believable if Jhourney had just developed this program and tried it out for the first time. But, as I said, they have been doing retreats for at least 2 years. It seems dubious the results could be this good without making more of a splash.
It also stokes the fires of my skepticism that this allegedly transformative knowledge is kept behind a $1,295 paywall. If Jhourney's house blend of jhana meditation makes you more altruistic, why wouldn't the people who work at Jhourney try to share it widely with the world? That's what I would do if I had developed a meditation program that I thought was really producing these sorts of results.
Maybe I would still need to charge something for it rather than make it completely free. A 1-year Headspace subscription costs $70. Maybe something in that ballpark.
Jhourney reminds me of Transcendental Meditation (TM), which charges $1,400 for meditation instruction that — from what I hear — is not very differentiated from what you can get for free or cheap. TM also makes extreme claims about the kinds of results it produces for people.
My impression of TM is that it's basically a scam. They are secretive, charge an inordinate amount of money, don't seem to produce better results than what you can get from Headspace or your typical local meditation teacher, and make claims about the benefits of the practice that far exceed the actual benefits.
I'm inclined to believe that Jhourney is similar. People do have transformative experiences — with meditation, with spiritual retreats, with pilgrimages like the Camino de Santiago (or secular walks like the Pacific Crest Trail), with religion, with psychedelics, with therapy, with all sorts of things — but that's different from what Jhourney seems to be claiming. Again, what I'm specifically skeptical of is:
-That a high percentage of people (e.g. 40%) will have a transformative experience.
-That this transformative experience or impression of having a transformative experience will lead to positive long-term life changes.
-That the percentage of people who experience something transformative, the magnitude of the transformative experience, or the long-term effects mark a radical departure from the experiences people have been having for decades in North America with meditation, psychedelics, and therapy.
In addition to meditation and the other normal things, I have tried all kinds of weird things like nootropics, hypnosis/hypnotherapy, and binaural beats. I am open to trying weird things. Another way to put it is that I'm sort of an "easy mark" for self-help fads.
So, when I read this post I was tempted to believe that Jhourney had invented a non-pharmacological version of the Limitless pill. But, for the reasons I just gave, Jhourney's narrative doesn't add up for me.
When they release the $70 app, maybe I'll try it then.
There are a few people who support both effective altruism and radical leftist politics who have written about how these two schools of thought might be integrated. Bob Jacobs, the former organizer of EA Ghent in Belgium, is one. You might be interested in his blog Collective Altruism: https://bobjacobs.substack.com/
Another writer you may be interested in is the academic philosopher David Thorstad. I don't know what his political views are. But his blog Reflective Altruism, which is about effective altruism, has covered a few topics relevant to this post, such as billionaire philanthropy, racism, sexism, and sexual harassment in the effective altruist movement: https://reflectivealtruism.com/post-series/
There is also a pseudonymous EA Forum user called titotal whose politics seem leftist or left-leaning. They have written some criticisms of certain aspects of the EA movement both here on the forum and on their blog: https://titotal.substack.com/
I don't know if any of the people I just mentioned wholeheartedly support radical feminism, though. Even among feminists and progressives or leftists, the reputation of radical feminism has been seriously damaged by a series of serious mistakes, including:
Support for the oppression of and systemic violence and discrimination against trans people[1]
Support for banning pornography[2]
Opposition to legalizing or decriminalizing sex work[3]
Arguing that most sex is unethical[4]
I'm vaguely aware that some radical feminists today probably take different stances on these topics, and that there have probably historically been some radical feminists who disagreed with these bad opinions, but the movement is tarnished by these mistakes and it will be difficult for it to recover.
In my experience, people who have radical leftist economic views are generally hostile to the idea of people in high-income countries donating to charities that provide medicine or anti-malarial bednets or cash to poor people in low-income countries. It's hard for me to imagine much cooperation or overlap between effective altruism and the radical left.
Effective altruism was founded as a movement focused on the effectiveness of charities that work on global poverty and global health. A lot of radical leftists (I'd guess the majority) fundamentally reject this idea. So, how many radical leftists are realistically going to end up supporting effective altruism? (I'm talking about radical leftists here because most radical feminists, including some of the ones you mentioned, also have radical leftist economic and political views.)
Finally, although there are many important ideas in radical feminist thought that I think anyone (including effective altruists) could draw from, there is also a large amount of low-quality scholarship and bad ideas to sift through. I already mentioned some of the bad ideas. One example of low-quality scholarship, in my opinion, is adrienne maree brown's book Pleasure Activism. I tried to read this book because it was recommended to me by a friend.
To give just one example of what I found to be low-quality scholarship, adrienne maree brown believes in vampires, believes she has been bitten by a vampire, and has asked for vampires to turn her into a vampire.
To give another example, the book is called Pleasure Activism, but it does not give a clear definition or explanation of what the term "pleasure activism" is supposed to mean. If you make a concept the title of your book, and you write a book that is nominally about that concept, then if I read your book, I should be able to understand that concept. Instead, the attempt to define the concept is too brief and too vague. This is the full extent of the definition from the book:

Pleasure activism is the work we do to reclaim our whole, happy, and satisfiable selves from the impacts, delusions, and limitations of oppression and/or supremacy.
Pleasure activism asserts that we all need and deserve pleasure and that our social structures must reflect this. In this moment, we must prioritize the pleasure of those most impacted by oppression.
Pleasure activists seek to understand and learn from the politics and power dynamics inside of everything that makes us feel good. This includes sex and the erotic, drugs, fashion, humor, passion work, connection, reading, cooking and/or eating, music and other arts, and so much more.
Pleasure activists believe that by tapping into the potential goodness in each of us we can generate justice and liberation, growing a healing abundance where we have been socialized to believe only scarcity exists.
Pleasure activism acts from an analysis that pleasure is a natural, safe, and liberated part of life, and that we can offer each other tools and education to make sure sex, desire, drugs, connection, and other pleasures aren't life-threatening or harming but life-enriching.
Pleasure activism includes work and life lived in the realms of satisfaction, joy, and erotic aliveness that bring about social and political change.
Ultimately, pleasure activism is us learning to make justice and liberation the most pleasurable experiences we can have on this planet.
What is pleasure activism? After reading this, I don't know. I'm not sure if adrienne maree brown knows, either.
To be clear, I'm a feminist, I'm LGBT, I believe in social justice, and I've voted for a social democratic political party multiple times. I took courses on feminist theory and queer studies when I was in university, and I think a lot of the scholarship in those fields is amazingly good.
But a lot of the radical left, to borrow a bon mot from Noam Chomsky, want to "live in some abstract seminar somewhere". They have no ideas about how to actually make the world better in specific, actionable ways,[5] or they have hazy ideas they can't clearly define or explain (like pleasure activism), or they have completely disastrous ideas that would lead to nightmares in real life (such as economic degrowth or authoritarian communism).
This is fine if you want to live in some abstract seminar somewhere, if you want to enjoy an aesthetic of radical change while changing nothing (and if we can rely on no governments ever trying to implement the disastrous ideas like degrowth or authoritarian communism that would kill millions of people), but what if you want to help rural families in sub-Saharan Africa not get malaria or afford a new roof for their home or get vaccines or vitamins for the children? Then you've got to put away the inscrutable theory and live in the real world (which does not have vampires in it).

1. ^ See the Wikipedia article on gender-critical feminism or the extraordinarily good video essay "Gender Critical" by the YouTuber and former academic philosopher ContraPoints.
2. ^ One ban was actually passed, but then overturned by a court.
3. ^ I haven't read this article, but if you're unfamiliar with this topic, at a glance, it seems like a good introduction to the debate: https://scholarlycommons.law.cwsl.edu/fs/242/
4. ^ ContraPoints' movie-length video essay "Twilight" covers this topic beautifully. Yes, it's very long, but it's so good!
5. ^ Here's a refreshing instance of some radical leftists candidly admitting this: https://2021.lagrandetransition.net/en/conference-themes/
EA should avoid using AI art for non-research purposes?
My strongest reason for disliking AI-generated images is that so often they look tacky, as you aptly said, or even disgustingly bad.
One of the worst parts of AI-generated art is that sometimes it looks good at a glance and then, as you look at it longer, you notice some horribly wrong detail. Human art (if it's good quality) lets you enjoy the small details. It can be a pleasure to discover them. AI-generated art ruins this by punishing you for paying close attention.
But thatâs a matter of taste.
What I'm voting "disagree" on is that the EA Forum should have a rule or a strong social norm against using AI-generated images. I don't think people should use ugly images, whether they're AI-generated or free stock photos. But leave it up to people to decide on a case-by-case basis which images are ugly, rather than making a rule that categorically bans AI-generated images.
I am trying to be open-minded to the ethical arguments against AI-generated art. I find the discourse frustratingly polarized.
For example, a lot of people are angry about the supposed environmental impact of AI-generated art, but what is the evidence of this? Anytime I've tried to look up hard numbers on how much energy AI uses, a) it's been hard to find clear, reliable information and b) the estimates I've found tend to be pretty small.
Similarly, is there evidence that AI-generated images are displacing the labour of human artists? Again, this is something I've tried to look into, but the answer isn't easy to find. There are anecdotes here and there, but it's hard to tell if there is a broader trend that is significantly affecting a large number of artists.
It's difficult to think about whether artists should have to give permission for their images to be used for AI training, or should be compensated if they are. There is no precedent in copyright law to cover this because this technology is unprecedented. For the same reason, there is no precedent in societal norms. We have to decide on a new way of thinking about a new situation, without traditions to rely on.
So, the three main ethical arguments against AI-generated art are:
-It harms the environment
-It takes income away from human artists
-AI companies should be required to get permission from artists before training AI models on their work and/or financially compensate them if they do
All three of these arguments feel really unsubstantiated to me. My impression right now is:
-Probably not
-Maybe? What's the evidence?
-Maybe? I don't know. What's the reasoning?
The main aesthetic argument against AI-generated art is of course:
-It's ugly
And I mostly agree. But those ChatGPT images in the Studio Ghibli style are absolutely beautiful. There is a 0% chance I will ever pay an artist to draw a Studio Ghibli-style picture of my cat. But I can use a computer to turn my cat into a funny, cute little drawing. And thatâs wonderful.
I'm a politically progressive person. I'm LGBT, I'm a feminist, I believe in social justice, I've voted for a social democratic political party multiple times, and I've been in community and in relationship with leftists a lot. I am so sick of online leftist political discourse.
I am not interested in thinking and talking about celebrities all the time. (So much online leftist discourse is about celebrities.)
I don't want to spend that much time and energy constantly re-evaluating which companies I boycott and whether there's a marginally more ethical alternative.
I don't want every discussion about every topic to be polarized, shut down, moralized, and made into a red-line issue where disagreement isn't tolerated. I'm sick of hyperbolic analogies between issues like ChatGPT and serious crimes. (I could give an example I heard, but it's so offensive I don't want to repeat it.)
I am fed up with leftists supporting authoritarianism, terrorism, and political assassinations. While moralizing about AI art.
So, please forgive me if I struggle to listen to all of online leftists' complaints with the charity they deserve. I am burnt out on this stuff at this point.
I don't know how to fix the offline left, but I'm personally so relieved that I don't use microblogging anymore (i.e., Twitter, Bluesky, Mastodon, or Threads) and that I've otherwise mostly extricated myself from online leftist discourse. It's too crazymaking for me to stomach.
There are two philosophies on what the key to life is.
The first philosophy is that the key to life is to separate yourself from the wretched masses of humanity by finding a special group of people that is above it all and becoming part of that group.
The second philosophy is that the key to life is to see the universal in your individual experience. And this means you are always stretching yourself to include more people, find connection with more people, and show compassion and empathy to more people. But this is constantly uncomfortable because, again and again, you have to face the wretched masses of humanity and say "me too, me too, me too" (and realize you are one of them).
I am a total believer in the second philosophy and a hater of the first philosophy. (Not because it's easy, but because it's right!) To the extent I care about effective altruism, it's because of the second philosophy: expand the moral circle, value all lives equally, extend beyond national borders, consider non-human creatures.
When I see people in effective altruism evince the first philosophy, to me, this is a profane betrayal of the whole point of the movement.
One of the reasons (among several other important reasons) that rationalists piss me off so much is that their whole worldview and subculture is based on the first philosophy. Even the word "rationalist" is about being superior to other people. If the rationalist community has one founder or leader, it would be Eliezer Yudkowsky. The way Eliezer Yudkowsky talks to and about other people, even people who are actively trying to help him or to understand him, is so hateful and so mean. He exhales contempt. And it isn't just Eliezer: you can go on LessWrong and read horrifying accounts of how some prominent people in the community have treated an employee or a romantic partner, with the stated justification that they are separate from and superior to others. Obviously there's a huge problem with racism, sexism, and anti-LGBT prejudice too, which are other ways of feeling separate and above.
There is no happiness to be found at the top of a hierarchy. Look at the people who think in the most hierarchical terms, who have climbed to the tops of the hierarchies they value. Are they happy? No. They're miserable. This is a game you can't win. It's a con. It's a lie.
In the beautiful words of the Franciscan friar Richard Rohr, "The great and merciful surprise is that we come to God not by doing it right but by doing it wrong!"
(Richard Rohr's episode of You Made It Weird with Pete Holmes is wonderful if you want to hear more.)
Okay. Thanks. I guessed maybe that's what you were trying to say. I didn't even look at the paper. It's just not clear from the post why you're citing this paper and what point you're trying to make about it.
I agree that we can't extrapolate from the claim "the most effective charities at fighting diseases in developing countries are 1,000x more effective than the average charity in that area" to "the most effective charities, in general, are 1,000x more effective than the average charity".
If people are making the second claim, they definitely should be corrected. I already believed that you've heard this claim before, but I'm also seeing corroboration from other comments that this is a commonly repeated claim. It seems like a case of people starting with a narrow claim that was true and then getting a little sloppy and generalizing it beyond what the evidence actually supports.
Trying to say how much more effective the best charities are than the average charity seems like a dauntingly broad question, and I reckon the juice ain't worth the squeeze. The Fred Hollows Foundation vs. seeing eye dog example gets the point across.
Thank you for explaining. Kindness like this matters to me a lot, and it also matters a lot to me whether someone is aware that another person is in need of their kindness.
If AI is having an economic impact by automating software engineers' labour or augmenting their productivity, I'd like to see some economic data or firm-level financial data or a scientific study that shows this.
Your anecdotal experience is interesting, for sure, but the other people who write code for a living who I've heard from have said, more or less, that AI tools save them the time it would take to copy and paste code from Stack Exchange, and that's about it.
I think AI's achievements on narrow tests are amazing. I think AlphaStar's success on competitive StarCraft II was amazing. But six years after AlphaStar and ten years after AlphaGo, have we seen any big real-world applications of deep reinforcement learning or imitation learning that produce economic value? Or do something else practically useful in a way we can measure? Not that I'm aware of.
Instead, we've had companies working on real-world applications of AI shutting down. The current hype about AGI reminds me a lot of the hype about self-driving cars that I heard over the last ten years, from around 2015 to 2025. In the five-year period from 2017 to 2022, the rhetoric on solving Level 4–5 autonomy was extremely aggressive and optimistic. In the last few years, there have been some signs that some people in the industry are giving up, such as Cruise closing up shop.
Similarly, some companies, including Tesla, Vicarious, Rethink Robotics, and several others, have tried to automate factory work and failed.
Other companies, like Covariant, have had modest success on relatively narrow robotics problems, like sorting objects into boxes in a warehouse, but nothing revolutionary.
The situation is complicated and the truth is not obvious, but it's too simple to say that predictions about AI progress have overall been too pessimistic or too conservative. (I'm only thinking about recent predictions, but one of the first predictions about AI progress, made in 1956, was wildly overoptimistic.[1])
I wrote a post here and a quick take here where I give my other reasons for skepticism about near-term AGI. That might help fill in more information about where I'm coming from, if you're curious.
Quote: