Okay. I actually watched the TikTok. That shoulda been step 1. I committed the cardinal sin of commenting without watching. (My previous comment was more a response to the screenshotted comments, based on my past experience with leftist discourse on TikTok and Twitter.)
The TikTok is 100% correct. The creator's points and arguments are absolutely correct. Every factual claim she makes is correct. The video is extremely reasonable, fair-minded, and even-handed. The creator is eloquent, perceptive, and clearly very intelligent. She comes across as earnest, sincere, kind, open-minded, and well-meaning. I really liked her brief discussion of Strangers Drowning. Just from this brief video, I already feel some fondness toward her. Based on this first impression, I like her.
If I still had a TikTok account, I would give the video a like.
Her exegesis of Peter Singer's parable of the drowning child is really, really good: quick, breezy, and straight to the point, in a way that should be the envy of any explainer. The only part that was a question mark for me was her use of the term "extreme utilitarians". It's not exactly inaccurate, though, and it does get the point across, so, now that I'm thinking about it, I guess it's actually fine. Come to think of it, if I were trying to explain this idea casually to a friend or an acquaintance or a general audience, I might use a similar phrase like "hardcore utilitarians" or something.
It isn't a technical term, but she is referring to the extreme personal sacrifice some people will go through for their moral views, or to people who take moral views further than the typical person will (probably further than even the typical utilitarian or the typical moral philosopher).
Her suspicion of the emotional motivations of people in EA who have pivoted from what tends to be boring, humble, and sometimes gruelling work in global poverty to high-paying, glamorous, luxurious, exciting work in AI safety is incredibly perceptive and just a really great point. I have said similar things in the past (and others have too), and even so, she put it so clearly and perceptively that I feel I now better understand the point I was trying to make, because she said it (and thought it) better. So, kudos to her on that.
I would say your instinct should not be to treat this as a PR or marketing or media problem, or to want to leap into the fray to provide a "counternarrative". I would say this is actually just perceptive, substantive, eloquently expressed criticism or skepticism. I think the appropriate response is to take it as a substantive argument or point.
There are many things people in EA could do if they wanted to do more to establish the credibility of AI safety for a wider audience or for mainstream society. Doing vastly more academic publishing on the topic is one idea. People are right not to take seriously ideas only written on blogs, forums, Twitter, or in books that don't go through any more rigour or academic review than the previous three mediums. Science and academia provide a blueprint for how to establish mainstream credibility of obscure technical ideas.
I'm sure there are other good ideas out there too. For example, why not get more curious about why AI safety critics, skeptics, and dissenters disagree? Why not figure out their arguments, engage deeply, and respond to them? This could be in informal mediums and not through academic publishing. I think it would be a meaningful step toward persuasion. It's kind of embarrassing for AI safety that it's fairly easy for critics and skeptics to lob up plausible-sounding objections to the AI safety thesis/worldview and there isn't really a convincing (to me, and to many others) response. Why not do the intellectual work first, and focus on the PR/marketing later?
Something that would go a long way for me, personally, toward establishing at least a bit more good faith and credibility would be if AI safety advocates were willing to burn bad arguments that don't make sense. For instance, if an AI safety advocate were willing to concede the fundamental, glaring flaws in AI 2027 or Situational Awareness, I would personally be willing to listen to them more carefully and take them more seriously. On the other hand, if someone can't acknowledge that this is an atrocious, ridiculous graph, then I sort of feel like I can safely ignore what they say, since overall they haven't demonstrated to me a level of seriousness, credibility, or reasonableness that I would feel is needed if it's going to be worthwhile for me to engage with their ideas.
Right now, whatever the best arguments in AI safety are, it feels like they're all lumped in with the worst arguments, and it's hard for me not to judge it all based on the worst arguments. I imagine this will be a recurring problem if AI safety tries to gain more mainstream, widespread acceptance. If like 10% of people in EA were constantly talking about how great homeopathy is, how it's curing all their ailments, and how foolish the medical and scientific establishment is for saying it's just a placebo, would you be as willing to take EA arguments about pandemic risk seriously? Or would you just figure that this community doesn't know what it's talking about? That's the situation for me with AI safety, and I'm sure others feel the same way, or would if they encountered AI safety ideas from an initial position of reasonable skepticism.
Those are just my first 2-3 ideas. Other people could probably brainstorm others. Overall, I think the intellectual work is lacking. More marketing/PR work would either fail or deserve to fail (even if it succeeded), in my view, because the intellectual foundation isn't there yet.
I actually share a lot of your read here. I think it is a very strong explanation of Singer's argument (the shoes-for-suit swap is a nice touch), and the observation about the motivation for AI safety warrants engagement rather than dismissal.
My one quibble with the video's content is the "extreme utilitarians" framing; as I'm one of maybe five EA virtue ethicists, I bristle a bit at the implication that EA requires utilitarianism, and in this context it reads as dismissive. It's a pretty minor issue, though.
I think the video is still worth providing a counter-narrative to, though, and that's actually going to be my primary disagreement. For me, that counter-narrative isn't that EA is perfect, but that taking a principled EA mindset towards problems actually leads towards better solutions, and has led to a lot of good being done in the world already.
The issue with the video, which I should've been more explicit about in the original comment, is that when taken in the context of TikTok, it acts as reinforcement for people who think that you can't try to make the world better. She presents a vision of EA in which it initially tried to do good (while not mentioning any of the good it actually did, just the sacrifices people made for it), was then corrupted by people with impure intentions, and now no longer does good.
Regardless of what you or I think of the AI safety movement, I think that the people who believe in it believe in it seriously, and got there primarily through reasoning from EA principles. It isn't a corruption of EA ideas of doing good, just a different way of accomplishing them, though we can (and should) disagree on how the weighting of these considerations plays out. And it primarily hasn't supplanted the other ways that people within the movement are doing good; it's supplemented them.
When people's first exposure to EA ideas leads them towards the "things can't be better" meme, that's something I think is worth combatting. I don't think EA is perfect, but I think that thinking about and acting on EA principles really can help make the world better, and that's what an ideal simple EA counter-narrative would emphasize to me.