Feels like the argument you’ve constructed is a better one than the one Thiel is actually making, which seems to be a very standard “evil actors often claim to be working for the greater good” argument with a libertarian gloss. Thiel doesn’t think redistribution is an obviously good idea that might backfire if it’s treated as too important, he actively loathes it.
I think the idea that trying too hard to do good can end up doing harm is absolutely a failure mode worth considering, but it has far more value in the context of specific examples. It seems like quite a common theme in AGI discourse (it follows from standard assumptions like AGI being near and potentially either incredibly beneficial or destructive, and research or public awareness either potentially solving the problem or starting a race, etc.), and the optimiser’s curse is a huge concern for EA cause prioritization over-indexing on particular data points. Maybe that deserves (even) more discussion.
But I don’t think a guy who doubts we’re on the verge of an AI singularity, and who couldn’t care less whether EAs encourage people to make the wrong tradeoffs between malaria nets, education and shrimp welfare, adds much to that debate, particularly not via a throwaway reference to EA in a list of philosophies popular with the other side of the political spectrum that he thinks are basically the sort of thing the Antichrist would say.
I mean, he is also committed to the somewhat less insane-sounding “growth is good even if it comes with risks” argument, but you can probably find more sympathetic, more coherent, and less interest-conflicted proponents of that view.
Ok, thanks, I think it’s fair to call me on this. (I realise the question of what Thiel actually thinks is not super interesting to me, compared to “does this critique contain inspiration for things to be aware of that I wasn’t previously really tracking”; but I get that most people probably aren’t orienting similarly, and I was kind of assuming that they were when I suggested this was why it was getting sympathy.)
I do think though that there’s a more nuanced point here than “trying too hard to do good can result in harm”. It’s more like “over-claiming about how to do good can result in harm”. For a caricature to make the point cleanly: suppose EA really just promoted bednets, and basically told everyone that what it meant to be good was to give more money to bednets. I think it’s easy to see how this gaining a lot of memetic influence (bednet cults; big bednet, etc.) could end up being destructive (even if bednets are great).
I think that EA is at least conceivably vulnerable to more subtle versions of the same mistake, and that this is worth being vigilant against. (Note this is only really a mistake that comes up for ideas that are so self-recommending that they lead to something like strategic movement-building around the ideas.)