Hey y'all,

My TikTok algorithm recently presented me with this video about effective altruism, with over 100k likes and (TikTok claims) almost 1 million views. That isn't a ridiculous amount, but it's a pretty broad audience to reach with one video, and the framing isn't particularly kind to EA. As far as criticisms go, it's not the worst: it starts with Peter Singer's thought experiment and takes the moral imperative seriously as a concept. But it also frames several EA and EA-adjacent activities negatively, saying EA, quote, "has an enormously well funded branch … that is spending millions on hosting AI safety conferences."
I think there's a lot to take from it. The first point relates to @Bella's recent argument that EA should be doing more to actively define itself. This is what happens when it doesn't. EA is legitimately an interesting topic to learn about because it asks an interesting question; that's what I assume drew many of us here to begin with. It's interesting enough that when outsiders make videos like this, even when they're not the picture that we'd prefer,[1] they capture the attention of many. This video is a significant impression, but it's not the end-all-be-all, and we should seek to define ourselves lest we be defined by videos like it.
The second is about zero-sum attitudes and leftism's relation to EA. In the comments, many views like this were presented:
@LennoxJohnson really thoughtfully grappled with this a few months ago, when he described his journey from a zero-sum form of leftism and a focus on structural change towards becoming more sympathetic to the orthodox EA approach. But I don't think we can depend on similar reckonings happening to everyone, all at the same time. Here I think the solution is much less clear than for the PR problem: on the one hand, EA sometimes doesn't grapple enough with systemic change; on the other, society would be dramatically better if more people took an EA outlook towards alleviating suffering.
For me, I'm partial towards demonstrating virtue as one of the primary ways of showing that it's possible to create improvement without systemic change. If EAs are directly helping people out with whatever they might have, it becomes harder to position yourself above people who are doing that. In particular, I keep hearing about GiveDirectly specifically as a way of doing this. When you're directly giving money to people much poorer than yourself, there's something to that that really can't be ignored. Money is agency in today's society, and when you're directly giving someone money, that's a form of charity that is much harder to interpret as paternalistic or narrow-sighted; it's just altruistic. GiveDirectly is already the benchmark against which GiveWell evaluates charities; it's worth emphasizing that even more within the movement and in our outreach efforts.
That isn't to say I think it should supplant x-risk reduction and AI safety work; those are still extremely important and neglected in society at large. But EA as a whole has a fundamental issue with what it is if it wants to be a mass movement. A few months ago, I ran into a service worker who could not be regarded as an EA by any stretch. But he was telling me about this new charity he'd heard about, GiveDirectly, and how giving to it felt like going around the charity industry and helping without working through existing power structures. In my opinion, people like this should form the core of a broader EA movement. I think it's possible to have a movement which is primarily based on the idea of doing good, where many members donate 1-10% of their income to charity, engage with EA ideas roughly weekly, and can be activated when they see something that's clearly dangerous to our long-term future. To some extent, that's what EA the movement should strive for. EA and 80k should be separate, and right now there is no distinction. @Mjreard expressed this a few months ago as EA needing fans (donors) rather than players (direct workers). We can and should work towards that world.
I thought her main point was pretty good:

"We should be suspicious of people who decide the most important thing to do is what they would have the most fun doing anyway" (regarding AI safety).
I am also suspicious about this, and suspect it to be a source of bias towards AI safety at the expense of other cause areas, regardless of the "true" importance of AI safety (FWIW, I think it's important).
Also, I think she's broadly right that EA is spending millions hosting AI safety conferences. I would imagine EAG Bay Area is over 50% AI safety focused, and millions are spent on that.
I also think saying AI safety is "particularly well funded" is a subjective call, and I wouldn't even say it's "basically untrue". It's not an unreasonable take given all the jobs in AI companies plus EA-funded AI safety jobs outside of labs. As a comparison, I'm not sure what animal welfare spend vs. AI safety spend is, but I imagine it wouldn't be an order of magnitude higher?
Despite all this, I disagreed with much of what she said, but I would put this in the top 30% of EA criticism I've seen (not hard, given how much dross there is out there).
And one should probably give some weight to the limitations imposed by the medium: a 3-minute video on a platform whose users are, on average, not known for long attention spans.
For what it's worth, I would guess that though the "funness" of AI safety research, especially technical AI safety research, is probably a factor in how many people are interested in working on it, I would be surprised if it's a factor in how much money is allocated to the field.
Thanks for the response; to be honest, it's something I'd agree with too. I've edited my initial comment to better reflect what's actually true. I wouldn't call the EA Global I've been to an "AI Safety Conference," but if Bay Area is truly different it wouldn't surprise me. "Well-funded" is also subjective, and I think it's likely I was letting my reflexive defensiveness get in the way of engaging directly. That said, I think the broader points still stand: the video exposes a weakness in EA comms, and the comments reflect broad low-trust attitudes towards ideas like EA. I hope people continue to engage with them.
My hobby horse around these parts has been that EA should be less scared about reaching out to the left (where I'm politically rooted) and thinking about what commonalities we have. This is something I have already seen in the animal welfare movement, where EAs are unafraid to work with existing vegan activism and have done a good job of selling philanthropic funding to them, despite having large differences of opinion on the margins.
As you note, it's not unreasonable that EA looks very far left from some perspectives. GiveDirectly is about direct empowerment, and I would argue that a lot of global development work, especially economic development, can be anti-imperial and generally accords with Marxist ideas of the Internationale. Better outreach and PR management in these communities would go a long way, in the same way it has for the political centre-left, who seem to get a lot more attention from EA.
Okay. I actually watched the TikTok. That shoulda been step 1: I committed the cardinal sin of commenting without watching. (My previous comment was more a response to the screenshotted comments, based on my past experience with leftist discourse on TikTok and Twitter.)
The TikTok is 100% correct. The creator's points and arguments are absolutely correct. Every factual claim she makes is correct. The video is extremely reasonable, fair-minded, and even-handed. The creator is eloquent, perceptive, and clearly very intelligent. She comes across as earnest, sincere, kind, open-minded, and well-meaning. I really liked her brief discussion of Strangers Drowning. Just from this brief video, I already feel some fondness toward her. Based on this first impression, I like her.
If I still had a TikTok account, I would give the video a like.
Her exegesis of Peter Singer's parable of the drowning child is really, really good: quick, breezy, and straight to the point, in a way that should be the envy of any explainer. The only question mark for me was her use of the term "extreme utilitarians". It's not exactly inaccurate, though, and it does get the point across, so, now that I'm thinking about it, I guess it's actually fine. Come to think of it, if I were trying to explain this idea casually to a friend, an acquaintance, or a general audience, I might use a similar phrase like "hardcore utilitarians".
It isn't a technical term, but she is referring to the extreme personal sacrifice some people will undergo for their moral views, or to people who take moral views further than the typical person will (probably further even than the typical utilitarian or the typical moral philosopher).
Her suspicion of the emotional motivations of people in EA who have pivoted from what tends to be boring, humble, and sometimes gruelling work in global poverty to high-paying, sexy, glamorous, luxurious, fun, exciting work in AI safety is incredibly perceptive and just a really great point. I have said (and others have said) similar things in the past, and even so, she put it so clearly that I feel I now better understand the point I was trying to make, because she said it (and thought it) better. So, kudos to her on that.
I would say your instinct should not be to treat this as a PR, marketing, or media problem, or to leap into the fray to provide a "counternarrative". This is actually perceptive, substantive, eloquently expressed criticism or skepticism, and I think the appropriate response is to take it as a substantive argument or point.
There are many things people in EA could do if they wanted to establish the credibility of AI safety for a wider audience or for mainstream society. Doing vastly more academic publishing on the topic is one idea. People are right not to take seriously ideas written only on blogs, forums, Twitter, or in books that don't go through any more rigour or review than those three mediums. Science and academia provide a blueprint for how to establish mainstream credibility for obscure technical ideas.
I'm sure there are other good ideas out there too. For example, why not get more curious about why AI safety critics, skeptics, and dissenters disagree? Why not figure out their arguments, engage deeply, and respond to them? This could happen in informal mediums rather than through academic publishing, and I think it would be a meaningful step toward persuasion. It's kind of embarrassing for AI safety that it's fairly easy for critics and skeptics to lob up plausible-sounding objections to the AI safety thesis/worldview and there isn't really a convincing (to me, and to many others) response. Why not do the intellectual work first, and focus on the PR/marketing later?
Something that would go a long way for me, personally, toward establishing at least a bit more good faith and credibility would be if AI safety advocates were willing to burn bad arguments that don't make sense. For instance, if an AI safety advocate were willing to concede the fundamental, glaring flaws in AI 2027 or Situational Awareness, I would personally be willing to listen to them more carefully and take them more seriously. On the other hand, if someone can't acknowledge that this is an atrocious, ridiculous graph, then I feel I can safely ignore what they say, since they haven't demonstrated the level of seriousness, credibility, or reasonableness I'd need to see for engaging with their ideas to be worthwhile.
Right now, whatever the best arguments in AI safety are, they feel lumped in with the worst arguments, and it's hard for me not to judge it all by the worst ones. I imagine this will be a recurring problem if AI safety tries to gain more mainstream, widespread acceptance. If 10% of people in EA were constantly talking about how great homeopathy is, how it's curing all their ailments, and how foolish the medical and scientific establishment is for calling it a placebo, would you be as willing to take EA arguments about pandemic risk seriously? Or would you just figure that this community doesn't know what it's talking about? That's the situation for me with AI safety, and I'm sure others feel the same way, or would if they encountered AI safety ideas from an initial position of reasonable skepticism.
Those are just my first two or three ideas; other people could probably brainstorm more. Overall, I think the intellectual work is lacking. More marketing/PR work would either fail or deserve to fail (even if it succeeded), in my view, because the intellectual foundation isn't there yet.
I actually share a lot of your read here. It is a very strong explanation of Singer's argument (the shoes-for-suit swap is a nice touch), and the observation about the motivations behind AI safety warrants engagement rather than dismissal.
My one quibble with the video's content is the "extreme utilitarians" framing; as I'm one of maybe five EA virtue ethicists, I bristle a bit at the implication that EA requires utilitarianism, and in this context it reads as dismissive. It's a pretty minor issue, though.
I think the video is still worth providing a counter-narrative to, though, and that's probably my primary disagreement. For me, that counter-narrative isn't that EA is perfect, but that taking a principled EA mindset towards problems actually leads to better solutions, and has led to a lot of good being done in the world already.
The issue with the video, which I should've been more explicit about in my original comment, is that, taken in the context of TikTok, it reinforces the view of people who think you can't try to make the world better. She presents a vision of EA in which it initially tried to do good (while mentioning none of the good it actually did, just the sacrifices people made for it), was then corrupted by people with impure intentions, and now no longer does good.
Regardless of what you or I think of the AI safety movement, the people who believe in it believe in it seriously, and got there primarily by reasoning from EA principles. It isn't a corruption of the EA idea of doing good, just a different way of accomplishing it, though we can (and should) disagree on how the weighting of those considerations plays out. And it primarily hasn't supplanted the other ways people within the movement are doing good; it's supplemented them.
When people's first exposure to EA ideas leads them towards the "things can't be better" meme, that's something worth combatting. I don't think EA is perfect, but I do think that thinking about and acting on EA principles really can help make the world better, and that's what an ideal simple EA counter-narrative would emphasize.
I agree there should be a counter-narrative. It is also important to realize that people who create, like, and comment on mean-spirited TikToks, absorbed in their own misguided ideology, are far enough from the target market that you really shouldn't worry about changing their behavior.
That's the thing that gets me here: the TikTok itself is mostly not mean-spirited. (I would recommend watching it; it's 3 minutes, and while it did make me cringe, there was definitely a decent amount of thought put into it!) Some of the commenters are a bit mean-spirited, I won't deny, but some are also just jaded. The problem, to me, is that the "thoughtful media" idea of EA, which this person embodies, says that EA has interesting philosophical grounding but also a lot of weird Silicon Valley stuff going on. Content like this is exactly what we should be hoping to influence.
Good characterization; I should have watched the video. It seems she may be unwilling to consider that the weird Silicon Valley stuff is correct, but she explicitly says she's just raising the question of motivated reasoning.
The "writing sci-fi with your smart friends" line is quite an unfair characterization, but it's fundamentally on us to counter. I think it will all turn on whether people find AI risk compelling.
For that, there's always going to be a large constituency scoffing. There's a level at which we should just tolerate that, but we're still at a place where communicating the nature of AI risk work more broadly and more clearly is important on the margin.
The number I've seen people throw out a few times as an estimate of how many people identify with the effective altruism movement is 10,000, although I don't know where that comes from. In one survey/poll I read (I think it was Pew or Gallup), 5% of Americans identify as being on the far left. 5% of the American population is about 17 million people.
If the American far left is going to change ideologically or culturally, it probably won't be because of anything the effective altruism movement does; it's just too big in comparison. I think there's a sense in which you've just got to resign yourself to the idea that many people on the far left will dislike effective altruism, insofar as they know anything about it, indefinitely into the future.
I think you have some interesting thoughts about messaging and outreach. For people who are concerned about paternalism or neocolonialism, or who are distrustful of charities, GiveDirectly is a great option, so promoting GiveDirectly to people with these concerns seems like a good idea. I wonder if explaining charities that do simple things, like the Against Malaria Foundation giving out bednets, might be appealing to them too. Something that simple is hard to imagine being secretly evil.
I'm personally fairly worn out and discouraged from trying, over many years, to talk to far-leftist friends, acquaintances, and members of various communities (online and local). Despite voting for a social democratic party and holding many strongly socially progressive and economically progressive/social democratic views, I've often had a hard time finding common ground with many people on the far left, to the extent that I've ended relationships with friends and acquaintances and left certain communities. Some of the views I hold that I was, in several cases, not able to find common ground on:
- Governments should be democratic rather than authoritarian
- It is morally unacceptable to commit terrorist attacks against civilians, or to murder your political enemies, and certainly not something to celebrate or glorify
- Joseph Stalin and Mao Zedong were brutal dictators and not praiseworthy or figures to celebrate in any way
I find this very discouraging, depressing, sad, infuriating, scary, and disturbing. I don't know what to do about it. I have no energy left for this kind of engagement, so I'm not the right person to ask. I guess I'm just trying to warn you about some of the stuff you might encounter and find yourself having to argue with if you do go down this road of engaging with the far left.
Overall, I find that getting into politics or topical "discourse" on TikTok or Twitter pretty much just sucks up time, attention, energy, and emotional stamina without spitting anything back out (like a black hole). There's an infinite amount of time-wasting and aggravation to be had there. And what good ever comes of it?
I wonder if there's meaningfully such a thing as trying to make better TikTok videos or better tweets, or if that's like trying to make better cigarettes. In a sense, yes, you can obviously make better ones: there are lots of people who do comedy videos on TikTok that I used to enjoy, and Hank Green does some good educational videos I see on YouTube Shorts. But I wonder if going in with the explicit intention of fighting discourse with discourse is going to get anywhere. (I commented on Bella's quick take with my thoughts on this as well.)
(Please don't interpret this as dismissive, I don't mean it that way, but I thought about this comic.)
However, I would strongly wager that the majority of this sample does not hold the three positions you outlined around authoritarianism, terrorist attacks, and Stalin & Mao (I also think it's quite unlikely that the people viewing the TikTok in question believe these things). Those beliefs are extremely fringe.
Two years ago, I thought these sorts of ideas were far more fringe among the far left than I do now. I could just have terrible luck, but I encountered these ideas way, way more than I ever expected to. And it wasn't just once or twice, or with people all in the same social circle: it was at least nine unconnected individuals or unconnected social circles/contexts/communities where someone expressed support for at least one of these ideas. Since it's happened so many times, it's hard for me to write off.
In conversations with friends I still have now, who don't endorse any of these extreme opinions, they've told me their experiences are similar to mine. So, still anecdotal, but hard to write off as just my bad luck.
I would find it comforting to see polling that found these to be truly fringe positions within the far left, so if anyone knows of any, please share it.
None of the nine examples I'm thinking of came from algorithmic social media feeds (some were people I knew in real life, some were local people in my community posting online, some were small and semi-private online communities). However, algorithmic social media feeds tend to amplify extreme views. So, if you step into that arena, even if only a minority of a minority believes something (e.g. 10-20% of the far left, which is itself 5% of the U.S. population, so 0.5-1% of Americans overall), it might get disproportionate attention (e.g. it might look like 10% of the overall American population believes it).
Overall, this is just a warning to anyone who wants to get into the fray of these TikTok/Twitter short-form algorithmic social media debates with the far left: it might be disconcerting and crazymaking. And a concern that this format/medium, in general, may just not be a productive way to change people's minds about anything or have serious conversations.
[1] The speaker says EA spends "millions on AI safety conferences," which is pretty inaccurate though not 100% wrong, as that is EA Global's budget, where AI safety topics are a major focus though not the only one. She also says AI safety is "particularly well-funded," which is basically untrue right now in the broader world, but isn't pants-on-fire wrong in strictly the EA world. (I've retracted this section following @NickLaing's comment.)
Yep, 100% agree with the weakness in EA comms. I'm happy there's been a fair amount of chat about this on the forum recently.
My hobby horse around these parts has been that EA should be less scared about reaching out to the left (where Iām politically rooted), and thinking about what commonalities we have. This is something I have already seen in the animal welfare movement, where EAs are unafraid to work with existing vegan activism, and have done a good job of selling philanthropic funding to them, despite having large differences in opinion on the margins.
As you note, itās not unreasonable that EA looks very far left from some perspectives. GiveDirectly is about direct empowerment, and I would argue that a lot of global development work, especially economic development, can be anti-imperial and generally concord with Marxist ideas of the internationale. Some better outreach and PR management in these communities would go a long way in the same way that it has for the political centre-left, who seem to get lots more attention from EA.
Okay. I actually watched the TikTok. That shoulda been step 1 ā I committed the cardinal sin of commenting without watching. (My previous comment was more responding to the screenshotted comments, based on my past experience with leftist discourse on TikTok and Twitter.)
The TikTok is 100% correct. The creatorās points and arguments are absolutely correct. Every factual claim she makes is correct. The video is extremely reasonable, fair-minded, and even-handed. The creator is eloquent, perceptive, and clearly very intelligent. She comes across as earnest, sincere, kind, open-minded, and well-meaning. I really liked her brief discussion of Strangers Drowning. Just from this brief video, I already feel some fondness toward her. Based on this first impression, I like her.
If I still had a TikTok account, I would give video a like.
Her exegesis of Peter Singerās parable of the drowning child is really, really good ā quick, breezy, and straight to the point, in a way that should be the envy of any explainer. The only part that was a question mark for me was her use of the term āextreme utilitariansā. Itās not exactly inaccurate, though, and it does get the point across, so, now that Iām thinking about it, I guess itās actually fine. Come to think of it, if I were trying to explain this idea casually to a friend or an acquaintance or a general audience, I might use a similar phrase like āhardcore utilitariansā or something.
It isn't a technical term, but she is referring to the extreme personal sacrifices some people will make for their moral views, or to people who take moral views to more of an extreme than the typical person (probably even the typical utilitarian or the typical moral philosopher) will.
Her suspicion of the emotional motivations of people in EA who have pivoted from what tends to be more boring, humble, and sometimes gruelling work in global poverty to high-paying, sexy, glamorous, luxurious, fun, exciting work in AI safety is incredibly perceptive and just a really great point. I have said (and others have said) similar things in the past, and even so, the way she said it was so clear and perceptive that I feel I now better understand the point I was trying to make because she said it (and thought it) better. So, kudos to her on that.
I would say your instinct should not be to treat this as a PR or marketing or media problem, or to want to leap into the fray to provide a "counternarrative". I would say this is actually just perceptive, substantive, eloquently expressed criticism or skepticism. I think the appropriate response is to take it as a substantive argument or point.
There are many things people in EA could do if they wanted to do more to establish the credibility of AI safety for a wider audience or for mainstream society. Doing vastly more academic publishing on the topic is one idea. People are right not to take seriously ideas only written on blogs, forums, Twitter, or in books that don't go through any more rigour or academic review than the previous three mediums. Science and academia provide a blueprint for how to establish mainstream credibility of obscure technical ideas.
I'm sure there are other good ideas out there too. For example, why not get more curious about why AI safety critics, skeptics, and dissenters disagree? Why not figure out their arguments, engage deeply, and respond to them? This could be done in informal mediums rather than through academic publishing. I think it would be a meaningful step toward persuasion. It's kind of embarrassing for AI safety that it's fairly easy for critics and skeptics to lob up plausible-sounding objections to the AI safety thesis/worldview and there isn't really a convincing (to me, and to many others) response. Why not do the intellectual work first, and focus on the PR/marketing later?
Something that would go a long way for me, personally, toward establishing at least a bit more good faith and credibility would be if AI safety advocates were willing to burn bad arguments that don't make sense. For instance, if an AI safety advocate were willing to concede the fundamental, glaring flaws in AI 2027 or Situational Awareness, I would personally be willing to listen to them more carefully and take them more seriously. On the other hand, if someone can't acknowledge that this is an atrocious, ridiculous graph, then I sort of feel like I can safely ignore what they say, since overall they haven't demonstrated to me the level of seriousness, credibility, or reasonableness that I feel is needed for it to be worthwhile to engage with their ideas.
Right now, whatever the best arguments in AI safety are, it feels like they're all lumped in with the worst arguments, and it's hard for me not to judge it all based on the worst arguments. I imagine this will be a recurring problem if AI safety tries to gain more mainstream, widespread acceptance. If like 10% of people in EA were constantly talking about how great homeopathy is, how it's curing all their ailments, and how foolish the medical and scientific establishment is for saying it's just a placebo, would you be as willing to take EA arguments about pandemic risk seriously? Or would you just figure that this community doesn't know what it's talking about? That's the situation for me with AI safety, and I'm sure others feel the same way, or would if they encountered AI safety ideas from an initial position of reasonable skepticism.
Those are just my first 2-3 ideas. Other people could probably brainstorm others. Overall, I think the intellectual work is lacking. More marketing/PR work would either fail or deserve to fail (even if it succeeded), in my view, because the intellectual foundation isn't there yet.
I actually share a lot of your read here. I think it is a very strong explanation of Singer's argument (the shoes-for-suit swap is a nice touch), and the observation about the motivations behind AI safety warrants engagement rather than dismissal.
My one quibble with the video's content is the "extreme utilitarians" framing; as I'm one of maybe five EA virtue ethicists, I bristle a bit at the implication that EA requires utilitarianism, and in this context it reads as dismissive. It's a pretty minor issue, though.
I do think the video is still worth providing a counter-narrative to, though, and that's probably my primary disagreement. For me, that counter-narrative isn't that EA is perfect, but that taking a principled EA mindset towards problems actually leads to better solutions, and has led to a lot of good being done in the world already.
The issue with the video, which I should've been more explicit about in my original comment, is that, taken in the context of TikTok, it acts as reinforcement for people who think you can't try to make the world better. She presents a vision of EA where it initially tried to do good (without mentioning any of the good it actually did, just the sacrifices people made for it), was then corrupted by people with impure intentions, and now no longer does good.
Regardless of what you or I think of the AI safety movement, I think that the people who believe in it believe in it seriously, and got there primarily through reasoning from EA principles. It isn't a corruption of EA ideas of doing good, just a different way of accomplishing them, though we can (and should) disagree on how the weighting of these factors plays out. And it primarily hasn't supplanted the other ways that people within the movement are doing good; it's supplemented them.
When the first exposure to EA ideas leads people towards the "things can't be better" meme, that's something I think is worth combatting. I don't think EA is perfect, but I think that thinking about and acting on EA principles really can help make the world better, and that's what an ideal simple EA counter-narrative would emphasize, to me.
I agree there should be a counter-narrative. It is also important to realize that the people who create, like, and comment on mean-spirited TikToks, self-absorbed in their own misguided ideology, are far enough from the target market that you really shouldn't worry about changing their behavior.
That's the thing that gets me here: the TikTok itself is mostly not mean-spirited. (I would recommend watching it; it's 3 minutes, and while it did make me cringe, there was definitely a decent amount of thought put into it!) Some of the commenters are a bit mean-spirited, I won't deny, but some are also just jaded. The problem, to me, right now, is that the "thoughtful media" idea of EA, which to me this person embodies, says that EA has interesting philosophical grounding and also a lot of weird Silicon Valley stuff going on. I think content like this is exactly what we should be hoping to influence.
Good characterization; I should have watched the video. It seems like she may be unwilling to consider that the weird Silicon Valley stuff is correct, but she explicitly says she's just raising the question of motivated reasoning.
The "writing scifi with your smart friends" line is quite an unfair characterization, but it's fundamentally on us to counter it. I think it will all turn on whether people find AI risk compelling.
For that, there's always going to be a large constituency scoffing. There's a level at which we should just tolerate that, but we're still at a place where communicating the nature of AI risk work more broadly and more clearly is important on the margin.
The number I've seen people throw out a few times as an estimate of how many people identify with the effective altruism movement is 10,000, although I don't know where that comes from. In one survey/poll I read (I think it was Pew or Gallup), 5% of Americans identify as being on the far left; 5% of the American population is about 17 million people.
If the American far left is going to change ideologically or culturally, it probably won't be because of anything the effective altruism movement does. It's just too big in comparison. I think there's a sense in which you've just gotta resign yourself to the idea that many people on the far left will dislike effective altruism, insofar as they know anything about it, indefinitely into the future.
I think you have some interesting thoughts about messaging and outreach. For people who are concerned about paternalism or neocolonialism, or who are distrustful of charities, GiveDirectly is a great option, so promoting GiveDirectly to people with these concerns seems like a good idea. I also wonder if explaining charities that do simple things, like the Against Malaria Foundation distributing bednets, might be appealing to people too. That's so simple, it's hard to imagine it somehow being secretly evil.
I'm personally fairly worn out and discouraged from trying, over many years, to talk to far-leftist friends, acquaintances, and members of various communities (online and local). Despite voting for a social democratic party and having many strongly socially progressive and economically progressive/social democratic views, I've often had a hard time finding common ground with many people on the far left, to the extent that I've ended relationships with friends and acquaintances and left certain communities. Some of the views I hold that I was, in several cases, not able to find common ground on:
-Governments should be democratic rather than authoritarian
-It is morally unacceptable to commit terrorist attacks against civilians, or to murder your political enemies, and certainly not something to celebrate or glorify
-Joseph Stalin and Mao Zedong were brutal dictators and not praiseworthy or figures to celebrate in any way
I find this very discouraging and depressing, and sad, and infuriating, and scary, and disturbing. I don't know what to do about it. I have no energy left for this kind of engagement, so I'm not the right person to ask. I guess I'm just trying to warn you about some of the sort of stuff you might encounter and find yourself having to argue with if you do go down this road of engaging with the far left.
Overall, I find that getting into politics or topical "discourse" on TikTok or Twitter pretty much just sucks up time, attention, energy, and emotional stamina without spitting anything back out (like a black hole). There's just an infinite amount of time-wasting and aggravation that can happen. And what good ever comes of it?
I wonder if there's meaningfully such a thing as trying to make better TikTok videos or better tweets, or if that's like trying to make better cigarettes. I mean, in a sense, yes, you can obviously make better ones. There are lots of people who just do comedy videos on TikTok that I used to enjoy, and Hank Green does some good educational videos I see on YouTube Shorts. But I wonder if going in with the explicit intention of fighting discourse with discourse is going to get anywhere. (I commented on Bella's quick take with my thoughts on this as well.)
(Please don't interpret this as dismissive, as I don't mean it that way, but I thought of this comic.)
However, I would strongly wager that the majority of this sample does not believe in the three ideological points you outlined around authoritarianism, terrorist attacks, and Stalin & Mao (and I think it is also quite unlikely that the people viewing the TikTok in question would believe these things). Those latter beliefs are extremely fringe.
Two years ago, I thought these sorts of ideas were way more fringe among the far left than I do now. I could just have terrible luck, but I encountered these sorts of ideas way, way, way more than I ever expected I would. And it wasn't just once or twice, or with people all in the same social circle. It was at least nine different unconnected individuals or unconnected social circles/social contexts/communities where someone expressed support for at least one of these ideas. Since it's happened so many times, it's hard for me to write it off.
In the conversations I've had with friends I still have now, who don't endorse any of these extreme opinions, they've told me their experiences are similar to mine. So, still anecdotal, but still hard to write off as just my bad luck.
I would find it comforting to see polling that found these to be truly fringe positions within the far left, so if anyone knows of any, please share it.
None of the nine examples I'm thinking of came from algorithmic social media feeds (some were people I knew in real life, some were local people in my community posting online, some were small and semi-private online communities). However, algorithmic social media feeds tend to amplify extreme views. So, if you step into that arena, even if only a minority of a minority believes something (e.g. 10-20% of the far left, which is 5% of the U.S. population, so 0.5-1% of Americans overall), it might get disproportionate attention (e.g. it might look like 10% of the overall American population believes it).
Overall, this is just a warning to anyone who wants to get into the fray of these sorts of TikTok/Twitter short-form algorithmic social media debates with the far left: it might be disconcerting and crazymaking. And a concern that this format/medium, in general, may just not be a productive way of changing people's minds about anything or having serious conversations.