Does anyone else feel there’s a vaguely missing mood here?
At the outset, I think EAs unfortunately do need to be aware that a powerful person is claiming that they are the Antichrist. And so I think Ben’s post is a useful public service. I can also appreciate the reasons one might want to write such a post as “just as an explication of [Thiel’s] views” without critique. I don’t even disagree with those reasons.
And yet . . . if you put these same ideas into the mouth of a random person, I suspect the vast majority of the Forum readership and commentariat would dismiss them as ridiculous ramblings, the same way we would treat the speech of your average person holding forth about the end of days on an urban street corner. I question whether any of us would take Thiel’s attempts at theology (or much of anything else) seriously—or try to massage them to make any sort of sense—if he were not a rich and powerful person.[1] To the extent that we’re analyzing what Thiel is selling with any degree of seriousness because of his wealth and influence rather than the merit of his ideas, does that pose any epistemic concerns?
To my (Christian) ears, this should be taken about as seriously as a major investor in the Coca-Cola Company spouting off that Pepsi is the work of the Antichrist. Or that Obama was/is the Antichrist—sadly, a fairly common view during his presidency. Even if one doesn’t care about the theological side of this, an individual’s claims that his ideological and/or financial opponents are somehow senior-ranking minions of Satan sound like a fairly good reason to be ordinarily dismissive of that person’s message. It doesn’t exactly suggest that the odds of finding gold nuggets buried in the sludge are worth the trouble of the search.
For what it’s worth, I am also confident that if you presented the idea that AI-cautious people were the Antichrist, without attributing it to Thiel, to a large group of ordinary Christians in the pews, or to a group of seminary professors, the near-universal response would range from puzzlement to hysterical laughter. So the fact that the EA audience is disproportionately secular isn’t doing all the work here.
In a similar article on LessWrong, Ben Pace says the following, which resonates with me:

Hearing him talk about Effective Altruists brought to mind this paragraph from SlateStarCodex:
One is reminded of the old joke about the Nazi papers. The rabbi catches an old Jewish man reading the Nazi newspaper and demands to know how he could look at such garbage. The man answers, “When I read our Jewish newspapers, the news is so depressing – oppression, death, genocide! But here, everything is great! We control the banks, we control the media. Why, just yesterday they said we had a plan to kick the Gentiles out of Germany entirely!”
I was somewhat pleasantly surprised to learn that one of the people who has been a major investor in AI companies and a major political intellectual influence toward tech and scientific acceleration believes that “the scary, dystopian AI narrative is way more compelling” and of “the Effective Altruist people” says “I think this time around they are winning the arguments”.
Winning the arguments is the primary mechanism by which I wish to change the world.
Yeah. Frankly, of all the criticisms of EA that might easily be turned into something more substantial, accurate, and useful with a little bit of reframing, a liberalism-hating surveillance-tech investor dressing his fundamental loathing of its principles, and his opposition to the limits it might impose on tech he actively promotes, in pretentious pseudo-Christian allusion seems least likely to add any value.[1]
It doesn’t take much searching of the Forum to find outsider criticisms of aspects of the AI safety movement which are a little less oblique than comparing it to the Antichrist, written by people without conflicts of interest who’ve probably never written anything as dumb as this, most of which seem to get less sympathetic treatment.
[1] And I say that as someone more in agreement with the selected Thiel pronouncements on how impactful and risky near-term AI is likely to be than the average EA.
And yet . . . if you put these same ideas into the mouth of a random person, I suspect the vast majority of the Forum readership and commentariat would dismiss them as ridiculous ramblings, the same way we would treat the speech of your average person holding forth about the end of days on an urban street corner.
I think this is a reasonable objection to make in general—I made similar objections in a similar case here.
But I think your argument that Peter hasn’t done anything to earn any epistemic credit is mistaken:
To the extent that we’re analyzing what Thiel is selling with any degree of seriousness because of his wealth and influence rather than the merit of his ideas, does that pose any epistemic concerns? To my (Christian) ears, this should be taken about as seriously as a major investor in the Coca-Cola Company spouting off that Pepsi is the work of Antichrist
This seems quite disanalogous to me. Peter has made his money largely by making a small number of investments that have done extraordinarily well. Skill at this involves understanding leaders and teams, future technological developments, economics, and other fields. It’s always possible to get lucky, but his degree of success provides, I think, significant evidence of skill. In contrast, over the last 25 years Coca-Cola has significantly underperformed the S&P 500, so your hypothetical Pepsi critic does not have the same standing.
I think that the theology is largely a distraction from the reason this is attracting sympathy, which I’d guess to be more like:
If you have some ideas which are pretty good, or even very good, but they get presented as though they’re the answer to everything, and they’re not, that could be quite destructive (and potentially very net-bad, even if the ideas were originally obviously good)
This is at least a plausible failure mode for EA, and correspondingly worth some attention/wariness
This kind of concern hasn’t gotten much airtime before (and is perhaps easier to express and understand as a serious possibility with some of the language-that-I-interpret-metaphorically).
Feels like the argument you’ve constructed is a better one than the one Thiel is actually making, which seems to be a very standard “evil actors often claim to be working for the greater good” argument with a libertarian gloss. Thiel doesn’t think redistribution is an obviously good idea that might backfire if it’s treated as too important, he actively loathes it.
I think the idea of trying too hard to do good things and ending up doing harm is absolutely a failure mode worth considering, but it has far more value in the context of specific examples. It seems like quite a common theme in AGI discourse (it follows from standard assumptions like AGI being near and potentially either incredibly beneficial or destructive, research or public awareness either potentially solving the problem or starting a race, etc.), and the optimiser’s curse is a huge concern for EA cause prioritization overindexing on particular data points. Maybe that deserves (even) more discussion.
But I don’t think a guy who doubts we’re on the verge of an AI singularity, and couldn’t care less whether EAs encourage people to make the wrong tradeoffs between malaria nets, education, and shrimp welfare, adds much to that debate, particularly not with a throwaway reference to EA in a list of philosophies, popular with the other side of the political spectrum, that he thinks are basically the sort of thing the Antichrist would say.
I mean, he is also committed to the somewhat less insane-sounding “growth is good even if it comes with risks” argument, but you can probably find more sympathetic and coherent and less interest-conflicted proponents of that view.
Ok, thanks, I think it’s fair to call me on this. (I realise the question of what Thiel actually thinks is not super interesting to me, compared to “does this critique contain inspiration for things to be aware of that I wasn’t previously really tracking”; but I get that most people probably aren’t orienting similarly, and I was kind of assuming that they were when I suggested this was why it was getting sympathy.)
I do think though that there’s a more nuanced point here than “trying too hard to do good can result in harm”. It’s more like “over-claiming about how to do good can result in harm”. For a caricature to make the point cleanly: suppose EA really just promoted bednets, and basically told everyone that what it meant to be good was to give more money to bednets. I think it’s easy to see how this gaining a lot of memetic influence (bednet cults; big bednet, etc.) could end up being destructive (even if bednets are great).
I think that EA is at least conceivably vulnerable to more subtle versions of the same mistake. And that that is worth being vigilant against. (Note this is only really a mistake that comes up for ideas that are so self-recommending that they lead to something like strategic movement-building around the ideas.)