I don't know what Josh thinks the flaws are, but since I agree that this one is more flawed, I can speak a bit for myself at least. I think most of what I saw as flawed came from isolated moments, in particular criticisms the author raised that seemed to me like they had fairly clear counterpoints that the author didn't bring up (other times he managed to do this quite well). A few that stand out to me, off the top of my head:
"Cremer said, of Bankman-Fried, 'Now everyone is in the Bahamas, and now all of a sudden we have to listen to three-hour podcasts with him, because he's the one with all the money. He's good at crypto so he must be good at public policy . . . what?!'"
The 80,000 Hours podcast is about many things, but principally and originally it is about effective career paths. Earning to give is recommended less these days, but they've only had one other interview with someone who earned to give that I can recall, and SBF is by far the most successful example of the path to date. Another thing the podcast covers is the state of EA opportunities/organizations, and learning about the priorities of one of the biggest new forces in the field, like FTX, seems clearly worthwhile for that. The three-hour point is also misleading to raise, since that is a very typical length for an 80k episode.
"Longtermism is invariably a phenomenon of its time: in the nineteen-seventies, sophisticated fans of 'Soylent Green' feared a population explosion; in the era of 'The Matrix,' people are prone to agonize about A.I."
This point strikes me as very ad hoc. AI is one of the oldest sci-fi tropes out there, and in order to find a recent, particularly influential example he had to go back to a movie over 20 years old that looks almost nothing like the risks people worry about with AI today. Meanwhile the example of population explosion is cherry-picked to be a case of sci-fi worry that seems misguided in retrospect. Why doesn't he talk about the era of 'Dr. Strangelove' and 'WarGames'? And immediately after this,
"In the week I spent in Oxford, I heard almost nothing about the month-old war in Ukraine. I could see how comforting it was, when everything seemed so awful, to take refuge on the higher plane of millenarianism."
Some people probably do take comfort in this, but generally those are people who, like the author, aren't viscerally worried about the risk. Others have very serious mental health problems from worrying about AI doom. I've had problems like this to some degree; others have had it so bad that they had to leave the movement entirely, and indeed criticize it from the complete opposite direction.
I am not saying that people who believe in AI risks only academically or performatively, and so can seek refuge in them, don't exist. I'm also not saying the author had to do yet more research and turn up solid evidence that the picture he is giving is incomplete. But when you start describing people's belief that everything and everyone they love may soon be destroyed as a comforting coping mechanism, I think you should bring at least a little skepticism to the table. It is possible that this just reflects the fact that you find a different real-world problem emotionally devastating at the moment, that thinking about a risk you don't personally take seriously is a distraction for you, and that you failed your empathy roll this time.
A deeper issue might be the lack of discussion of the talent constraint on many top cause areas in the context of the controversies over spending on community building, which is arguably the key consideration much of that debate turns on. The increased spending on community building (which still isn't anywhere close to most of the spending) looks more uncomplicatedly bad if you miss this dimension.
Again though, this piece goes through a ton of points, mostly quite well, and can't be expected to land perfectly everywhere, so I'm pretty willing to forgive problems like these when I run into them. They are just the sorts of things that made me think this was more flawed than the other pieces.
Curious to hear what you thought these were, if you feel it's worth your time to share.