Decoding the Gurus is a podcast in which an anthropologist and a psychologist critique popular guru-like figures (Jordan Peterson, Nassim N. Taleb, Brené Brown, Ibram X. Kendi, Sam Harris, etc.). I’ve listened to two or three previous episodes, and my general impression is that the hosts are too rambly/jokey/jovial, but that their interpretations are harsh but fair. I find the description of their episode on Nassim N. Taleb fairly representative:
Taleb is a smart guy and quite fun to read and listen to. But he’s also an infinite singularity of arrogance and hyperbole. Matt and Chris can’t help but notice how convenient this pose is, when confronted with difficult-to-handle rebuttals.
Taleb is a fun mixed bag of solid and dubious claims. But it’s worth thinking about the degree to which those solid ideas were already well… solid. Many seem to have been known for decades even by all the ‘morons, frauds and assholes’ that Taleb hates.
To what degree does Taleb’s reputation rest on hyperbole and intuitive-sounding hot-takes?
A few weeks ago they released an episode about Eliezer Yudkowsky titled Eliezer Yudkowsky: AI is going to kill us all. I’m only partway through listening to it, but so far they have offered reasonable but not rock-solid critiques (such as noting that it is a red flag when someone lists off a variety of fields they claim expertise in, or highlighting behavior that lines up with a Cassandra complex).
The difficulty I have with issues like this parallels the difficulty of evaluating any other “end of the world” claim: the fact that many other people have been wrong about their own “end of the world” claims doesn’t really demonstrate that this one is wrong. It perhaps suggests that I should not accept it at face value and I should interrogate the claim, but it certainly doesn’t prove falsehood.
You’re right, but it does feel like pretty strong induction, and not just toward declining to accept the claim at face value but toward demanding extraordinary evidence for it. I’m speaking from the point of view of someone ignorant of the topic, just making the inference from the perennially recurring apocalyptic discourses.
It perhaps suggests that I should not accept it at face value and I should interrogate the claim, but it certainly doesn’t prove falsehood.
True, but you only have a finite amount of time to spend investigating claims of apocalypses. If you do a deep dive into the arguments of one of the main proponents of a theory, and find that it relies on dubious reasoning and poor science (like the “mix proteins to make diamondoid bacteria” scenario), then dismissal is a fairly understandable response.
If the AI safety community wants to avoid this sort of dismissal, they should pick better arguments and better spokespeople, and be more willing to call out bad reasoning when it happens.