Thanks for the feedback!
FWIW a bunch of the polemical elements were deliberate. My sense is something like: “All of these points are kinda well-known, but somehow people don’t… join the dots together? Like, they think of each of them as unfortunate accidents, when they actually demonstrate that the movement itself is deeply broken.”
There’s a kind of viewpoint flip from being like “yeah I keep hearing about individual cases that sure seem bad but probably they’ll do better next time” to “oh man, this is systemic”. And I don’t really know how to induce the viewpoint shift except by being kinda intense about it.
Upon reflection, I actually take this exchange to be an example of what I’m trying to address. Like, I gave a talk that was, according to you, “so extreme that it is hard to take seriously”, and your three criticisms were:
An (admittedly embarrassing) terminological slip on NEPA.
A strawman of my point (I never said anyone was “single-handedly responsible”, that’s a much higher bar than “caused”—though I do in hindsight think that just saying “caused” without any qualifiers was sloppy of me).
A critique of an omission (on water/air pollution).
I imagine you have better criticisms to make, but ultimately (as you mention) we do agree on the core point, and so in some sense the message I’m getting is “yeah, listen, environmentalism has messed up a bunch of stuff really badly, but you’re not allowed to be mad about it”.
And I basically just disagree with that. I do think being mad about it (or maybe “outraged” is a better term) will have some negative effects on my personal epistemics (which I’m trying carefully to manage). But given the scale of the harms caused, this level of criticism seems like an acceptable and proportional discursive move. (Though note that I’d have done things differently if I felt like criticism that severe was already common within the political bubble of my audience—I think outrage is much worse when it bandwagons.)
EDIT: what do you mean by “how to get broad engagement on this”? Like, you don’t see how this could be interesting to a wider audience? You don’t know how to engage with it yourself? Something else?
I think we are misunderstanding each other a bit.
I am in no way trying to imply that you shouldn’t be mad about environmentalism’s failings—in fact, I am mad about them on a daily basis. I think if being mad about environmentalism’s failings is the main point, then what Ezra Klein and Derek Thompson are currently doing with Abundance is a good example of communicating many of your criticisms in a way optimized to land with those who need to hear it.
My point was merely that framing the example in such extreme terms will lose a lot of people, despite it being only very tangentially related to the main points you are trying to make. Maybe that’s okay, but it didn’t seem like your overall goal was to make a point about environmentalism, so losing people on an example stated in such an extreme fashion did not seem worth it to me.
In fairness to Richard, I think it comes across in text a lot more strongly than, in my view, it came across listening on YouTube.
Ah, gotcha. Yepp, that’s a fair point, and worth me being more careful about in the future.
I do think we differ a bit on how disagreeable we think advocacy should be, though. For example, I recently retweeted this criticism of Abundance, which is basically saying that they overly optimized for it to land with those who need to hear it.
And in general I think it’s worth losing a bunch of listeners in order to convey things more deeply to the ones who remain (because if my own models of movement failure have been informed by environmentalism etc, it’s hard to talk around them).
But in this particular case, yeah, probably a bit of an own goal to include the environmentalism stuff so strongly in an AI talk.
My quick take:
I think that at a high level you make some good points. I also think it’s probably a good thing for some people who care about AI safety to appear to the current right as ideologically aligned with them.
At the same time, a lot of your framing matches incredibly well with what I see as current right-wing talking points.
“And in general I think it’s worth losing a bunch of listeners in order to convey things more deeply to the ones who remain”
This comes across as absurd to me. I’m all for some people holding uncomfortable or difficult positions. But when those positions sound exactly like the kind of thing that would gain favor with a certain party, I have a very tough time thinking that the author is simultaneously optimizing for “conveying things deeply”. Personally, I find a lot of the framing irrelevant, distracting, and problematic.
As an example, if I were talking to a right-wing audience, I wouldn’t focus on examples of racism in the South if equally good examples in other domains would do. I’d expect that such examples would get in the way of good discussion in the areas where the audience and I would more easily agree.
Honestly, I have had a decent hypothesis that you are consciously doing all of this just in order to gain favor with some people on the right. I could see a lot of arguments people could make for this. But that hypothesis makes more sense for the Twitter stuff than here. Either way, it does make it difficult for me to know how to engage. On one hand, if you do honestly believe this stuff, I am very uncomfortable with it: I highly disagree with a lot of MAGA thinking, including some of the frames you reference (which seem to fit that vibe). On the other, you’re actively lying about what you believe, on a critical topic, in an important set of public debates.
Anyway, this does feel like a pity of a situation. I think a lot of your work is quite good, and in theory, the parts that read to me as MAGA-aligned don’t need to get in the way. But I realize we live in an environment where that’s challenging.
(Also, if it’s not obvious, I do like a lot of right-wing thinkers. I think that the George Mason libertarians are quite great, for example. But I personally have had a lot of trouble with the MAGA thinkers as of late. My overall problem is much less with conservative thinking than it is MAGA thinking.)
a lot of your framing matches incredibly well with what I see as current right-wing talking points
Occam’s razor says that this is because I’m right-wing (in the MAGA sense not just the libertarian sense).
It seems like you’re downweighting this hypothesis primarily because you personally have so much trouble with MAGA thinkers, to the point where you struggle to understand why I’d sincerely hold this position. Would you say that’s a fair summary? If so hopefully some forthcoming writings of mine will help bridge this gap.
It seems like the other reason you’re downweighting that hypothesis is because my framing seems unnecessarily provocative. But consider that I’m not actually optimizing for the average extent to which my audience changes their mind. I’m optimizing for something closer to the peak extent to which audience members change their mind (because I generally think of intellectual productivity as being heavy-tailed). When you’re optimizing for that you may well do things like give a talk to a right-wing audience about racism in the south, because for each person there’s a small chance that this example changes their worldview a lot.
I’m open to the idea that this is an ineffective or counterproductive strategy, which is why I concede above that this one probably went a bit too far. But I don’t think it’s absurd by any means.
Insofar as I’m doing something I don’t reflectively endorse, I think it’s probably just being too contrarian because I enjoy being contrarian. But I am trying to decrease the extent to which I enjoy being contrarian in proportion to how much I decrease my fear of social judgment (because if you only have the latter then you end up too conformist) and that’s a somewhat slow process.
It seems like you’re downweighting this hypothesis primarily because you personally have so much trouble with MAGA thinkers, to the point where you struggle to understand why I’d sincerely hold this position. Would you say that’s a fair summary? If so hopefully some forthcoming writings of mine will help bridge this gap.
If you’re referring to the part where I said I wasn’t sure if you were faking it—I’d agree. From my standpoint, it seems like you’ve shifted to hold beliefs that both seem highly suspicious and highly convenient—this starts to raise the hypothesis that you’re doing it, at least partially, strategically.
(I relatedly think that a lot of similar posturing is happening on both sides of the political aisle. But I generally expect that the politicians and power figures are primarily doing this for strategic interests, while the news consumers are much more likely to actually genuinely believe it. I’d suspect that others here would think similarly of me if we had a hard-left administration and I suddenly changed my tune to be very in line with it.)
I’m optimizing for something closer to the peak extent to which audience members change their mind (because I generally think of intellectual productivity as being heavy-tailed).
Again, this seems silly to me. For one thing, while I don’t always trust people’s publicly-stated political viewpoints, I trust their stated reasons for doing these sorts of things even less. I could imagine that your statement is what it honestly feels like to you, but it just raises a bunch of alarm bells for me. Basically, if I’m trying to imagine someone coming up with a convincing reason to be highly and (from what I can tell) unnecessarily provocative, I’d expect them to raise some pretty wacky reasons for it. I’d guess that the answer is often simpler, like, “I find that trolling just brings with it more attention, and this is useful for me,” or “I like bringing in provocative beliefs that I have, wherever I can, even if it hurts an essay about a very different topic. I do this because I care a great deal about spreading these specific beliefs. One convenient thing here is that I get to sell readers on an essay about X, but really, I’ll use this as an opportunity to talk about Y instead.”
Here, I just don’t see how it helps. Maybe it attracts MAGA readers. But for the key points that aren’t MAGA-aligned, I’d expect this to get less genuine attention, not more. To me it sounds like the question is, “Does a MAGA veneer help make intellectual work more appealing to smart people?” And that sounds pretty out-there to me.
When you’re optimizing for that you may well do things like give a talk to a right-wing audience about racism in the south, because for each person there’s a small chance that this example changes their worldview a lot.
To be clear, my example wasn’t “I’m trying to talk to people in the south about racism.” It’s more like, “I’m trying to talk to people in the south about animal welfare, and in doing so, I bring up examples of people in the South being racist.”
One could say, “But then, it’s a good thing that you bring up points about racism to those people, because it’s actually more important that you teach those people about racism than about animal welfare.”
But that would match my second point above: “I like bringing in provocative beliefs that I have.” This would sound like you’re trying to sneakily talk about racism while pretending, for some reason, to talk about animal welfare.
The most obvious thing is that if you care about animal welfare and are giving a presentation in the deep US South, you can avoid examples that villainize people in the South.
Insofar as I’m doing something I don’t reflectively endorse, I think it’s probably just being too contrarian because I enjoy being contrarian. But I am trying to decrease the extent to which I enjoy being contrarian in proportion to how much I decrease my fear of social judgment (because if you only have the latter then you end up too conformist) and that’s a somewhat slow process.
I liked this part of your statement and can sympathize. I think that having strong contrarians around is very important, but also that being a contrarian comes with a bunch of potential dangers. Doing it well seems incredibly difficult. This isn’t just an issue of “how to still contribute value to a community.” It’s also an issue of “not going personally insane by chasing some feeling of uniqueness.” From what I’ve seen, disagreeableness is a very high-variance strategy, and if you’re not careful, it can go dramatically wrong.
Stepping back a bit—the main things that worry me here:
1. I think that disagreeable people often engage in patterns that are non-cooperative, like epistemic sleight-of-hand and trolling people. I’m concerned that some of your work around this matches some of these patterns.
2. I’m nervous that you and/or others might slide into clearly-incorrect and dangerous MAGA worldviews. Typically, the way people get drawn deep into an ideology is that they begin by testing the waters publicly with various statements. Very often, the conclusion of this is a place where they get really locked into the ideology. From there, it seems incredibly difficult to recover—for example, my guess is that Elon Musk has pissed off many non-MAGA folks, and at this point has very little way to go back without losing face. Your writing using MAGA ideas both implies to me that you might be sliding this way, and worries me that you’ll be encouraging more people to go this route (which I personally have a lot of trouble with).
I think you’ve done some good work and hope you can continue to do so going forward. At the same time, I’m going to feel anxious about such work whenever I suspect that (1) and (2) might be happening.
To be clear, my example wasn’t “I’m trying to talk to people in the south about racism” It’s more like, “I’m trying to talk to people in the south about animal welfare, and in doing so, I bring up examples around South people being racist.”
Yeah I got that. Let me flesh out an analogy a little more:
Suppose you want to pitch people in the south on animal welfare. And you have a hypothesis for why people in the south don’t care much about animal welfare, which is that they tend to have smaller circles of moral concern than people in the north. Here are two types of example you could give:
You could give an example which fits with their existing worldview—like “having small circles of moral concern is what the north is doing when they’re dismissive of the south”. And then they’ll nod along and think to themselves “yeah, fuck the north” and slot what you’re saying into their minds as another piece of ammunition.
Or you could give an example that actively clashes with their worldview: “hey, I think you guys are making the same kind of mistake that a bunch of people in the south have historically made by being racist”. And then most people will bounce off, but a couple will be like “oh shit, that’s what it looks like to have a surprisingly small circle of moral concern and not realize it”.
My claims:
Insofar as the people in the latter category have that realization, it will be to a significant extent because you used an example that was controversial to them, rather than one which already made sense to them.
People in AI safety are plenty good at saying phrases like “some AI safety interventions are net negative” and “unilateralist’s curse” and so on. But from my perspective there’s a missing step of… trying to deeply understand the forces that make movements net negative by their own values? Trying to synthesize actual lessons from a bunch of past fuck-ups?
I personally spent a long time being like “yeah I guess AI safety might have messed up big-time by leading to the founding of the AGI labs” but then not really doing anything differently. I only snapped out of complacency when I got to observe first-hand a bunch of the drama at OpenAI (which inspired this post). And so I have a hypothesis that it’s really valuable to have some experience where you’re like “oh shit, that’s what it looks like for something that seems really well-intentioned that everyone in my bubble is positive-ish about to make the world much worse”. That’s what I was trying to induce with my environmentalism slide (as best I can reconstruct, though note that the process by which I actually wrote it was much more intuitive and haphazard than the argument I’m making here).
I’m nervous that you and/or others might slide into clearly-incorrect and dangerous MAGA worldviews.
Yeah, that is a reasonable fear to have (which is part of why I’m engaging extensively here about meta-level considerations, so you can see that I’m not just running on reflexive tribalism).
Having said that, there’s something here reminiscent of “I can tolerate anything except the outgroup”. Intellectual tolerance isn’t for ideas you think are plausible—that’s just normal discussion. It’s for ideas you think are clearly incorrect, i.e. your epistemic outgroup. Of course you want to draw some lines for discussion of aliens or magic or whatever, but in this case it’s a memeplex endorsed (to some extent) by approximately half of America, so clearly within the bounds of things that are worth discussing. (You added “dangerous” too, but that is basically a general-purpose objection to any ideas which violate the existing consensus, so I don’t think it’s a good criterion for judging which speech to discourage.)
In other words, the optimal number of people raising and defending MAGA ideas in EA and AI safety is clearly not zero. Now, I do think that in an ideal world I’d be doing this more carefully. E.g. I flagged in the transcript a factual claim that I later realized was mistaken, and there’s been various pushback on the graphs I’ve used, and the “caused climate change” thing was an overstatement, and so on. Being more cautious would help with your concerns about “epistemic sleight-of-hand”. But for better or worse I am temperamentally a big-ideas thinker, and when I feel external pressure to make my work more careful, that often kills my motivation to do it (which is true in AI safety too—I try to focus much more on first-principles reasoning than detailed analysis). In general I think people should discount my views somewhat because of this (and I give several related warnings in my talk), but I do think that’s pretty different from the hypothesis you mention that I’m being deliberately deceptive.
Thanks for continuing to engage! I really wasn’t expecting this to go so long. I appreciate that you are engaging on the meta-level, and also that you are keeping more controversial claims separate for now.
On the thought experiment about the people in the South, it sounds like we may well have a crux[1] here. I suspect it would be strained to discuss it much further; we’d need to get more and more detailed on the thought experiment, and my guess is that this would turn into a much longer debate.
Some quick things:
“in this case it’s a memeplex endorsed (to some extent) by approximately half of America”
This is the sort of sentence I find frustrating. It feels very motte-and-bailey—on one hand, I expect you to make a narrow point popular on some parts of MAGA Twitter/X; on the other, I expect you to say, “Well, actually, Trump got 51% of the popular vote, so the important stuff is actually a majority opinion.”
I’m pretty sure that very few of the specific points I would have a lot of trouble with are actually substantially endorsed by half of America. Sure, there are ways to phrase things very carefully such that versions of them can technically be seen as being endorsed, but I get suspicious quickly.
The weasel phrases here of “to some extent” and “approximately”, and even the vague phrases “memeplex” and “endorsed”, also strike me as very imprecise. As I think about it, I’m pretty sure that sentence could hold, with a bit of clever reasoning, for almost every claim I could imagine someone making on either side.
In other words, the optimal number of people raising and defending MAGA ideas in EA and AI safety is clearly not zero.
To be clear, I’m fine with someone straightforwardly writing good arguments in favor of much of MAGA[2]. One of my main issues with this piece is that it’s not claiming to be that; it feels like you’re trying to sneakily (intentionally or unintentionally) make this about MAGA.
I’m not sure what to make of the wording of “the optimal number of people raising and defending MAGA ideas in EA and AI safety is clearly not zero.” I mean, to me, the more potentially inflammatory the content is, the more I’d want to make sure it’s written very carefully.
I could imagine a radical looting-promoting Marxist coming along, writing a trashy post in favor of their agenda here, then claiming “the optimal number of people raising and defending Marxism is not zero.”
This phrase seems to create a frame for discussion. Like, “There’s very little discussion about topic/ideology X happening on the EA Forum now. Let’s round that to zero. Clearly, it seems intellectually closed-minded to favor literally zero discussion of a topic. I’m going to have a discussion on that topic. So if you’re in favor of intellectual activity, you must be in favor of what I’ll do.”
But for better or worse I am temperamentally a big-ideas thinker, and when I feel external pressure to make my work more careful that often kills my motivation to do it
I can appreciate that a lot of people would have more motivation to do public writing if they didn’t need to be as careful when doing so. But of course, if someone makes claims that are misleading or wrong, and that does damage, the damage is still very much done. In this case I think you hurt your own cause by tying these things together, and it’s also easy for me to imagine a world in which no one helped correct your work and some people came away thinking your points were simply correct.
I assume one solution looks like making it clear you are uncertain/humble, using disclaimers, not saying things too strongly, etc. I appreciate that you did some of this in the comments/responses (and some in the talk), but I would prefer it if the original post, and the related X/Twitter posts, were more in line with that.
I get the impression that a lot of people with strong ideologies, across the spectrum, make a bunch of points with very weak evidence but tons of confidence. I really don’t like this pattern, and I’d assume you generally wouldn’t either. The confidence-to-evidence disparity is the main issue, not the mediocre evidence alone. (Even putting a claim in a talk like this demonstrates a level of confidence in it; if you’re really unsure of it, I’d expect it in footnotes or other short-form posts, maybe.)
I do think that’s pretty different from the hypothesis you mention that I’m being deliberately deceptive.
That could be, and is good to know. I get the impression that lots of people in MAGA (and on the far Left) frequently lie to get their way on many issues, so it’s hard to tell.
At the same time, I’d flag that it can still be very easy to accidentally get into the habit of using dark patterns in communication.
Anyhow—thanks again for being willing to go through this publicly. One reason I find this conversation interesting is that you’re willing to do it publicly and introspectively—most people who I hear making strong ideological claims don’t seem to be. It’s an uncomfortable topic to talk about. But I hope this sort of discussion could be useful to others when dealing with other intellectuals in similar situations.
[1] By crux I just mean “key point where we have a strong disagreement”.
[2] The closer people get to explicitly defending Fascism, or, say, White Nationalism here, the more nervous I’ll be. I do think that many ideas within MAGA could be steelmanned safely, but it gets messy.