I really liked this one. Between this, the New Yorker piece, and Dylan Matthews’ Vox one, there’s been an unusual amount of nuanced, high-quality coverage of EA in mainstream outlets lately imo.
Same here—Will MacAskill’s publicists are doing a great job getting EA in the public eye right as What We Owe the Future looms. (Speaking of which, the front page of this Sunday’s New York Times opinion section is The Case for Longtermism!)
On a slight tangent: as a university organizer, I’ve noticed that few college students have heard of EA at all (based on informal polling outside a dining hall, under 10%). It’ll be interesting to see if/how all this recent coverage changes that.
Two of my acquaintances at uni who sort of knew I was involved in EA but weren’t especially interested in it themselves reached out quite recently (~a week ago) to ask about EA because they had come across it elsewhere. My guess is that many more will come across it and be curious but not necessarily connect the dots to engaging with their university student group.
It would be interesting to hear from people at universities where the new academic year starts soon about whether the media coverage of EA has changed anything!
I don’t know what Josh thinks the flaws are, but since I agree that this one is more flawed, I can speak a bit for myself at least. Most of what I saw as flawed came from isolated moments: in particular, criticisms the author raised that seemed to me to have fairly clear counterpoints he didn’t bring up (other times he managed this quite well). A few that stand out to me, off the top of my head:
“Cremer said, of Bankman-Fried, ‘Now everyone is in the Bahamas, and now all of a sudden we have to listen to three-hour podcasts with him, because he’s the one with all the money. He’s good at crypto so he must be good at public policy . . . what?!’”
The 80,000 Hours podcast is about many things, but principally and originally it is about effective career paths. Earning to give is recommended less these days, but they’ve only had one other interview with someone who earned to give that I can recall, and SBF is by far the most successful example of the path to date. The podcast also covers the state of EA opportunities and organizations, and learning about the priorities of one of the biggest new forces in the field, like FTX, seems clearly worthwhile for that. The three-hour complaint is also misleading to raise, since that is a very typical length for 80k episodes.
“Longtermism is invariably a phenomenon of its time: in the nineteen-seventies, sophisticated fans of ‘Soylent Green’ feared a population explosion; in the era of ‘The Matrix,’ people are prone to agonize about A.I.”
This point strikes me as very ad hoc. AI is one of the oldest sci-fi tropes out there, and to find a recent, particularly influential example he had to go back to a movie over twenty years old that looks almost nothing like the risks people worry about with AI today. Meanwhile, the population-explosion example is cherry-picked to be a case of sci-fi worry that seems misguided in retrospect. Why doesn’t he talk about the era of “Dr. Strangelove” and “WarGames”? And immediately after this:
“In the week I spent in Oxford, I heard almost nothing about the month-old war in Ukraine. I could see how comforting it was, when everything seemed so awful, to take refuge on the higher plane of millenarianism.”
Some people probably do take comfort in this, but they are generally people who, like the author, aren’t viscerally worried about the risk. Others have very serious mental health problems from worrying about AI doom. I’ve had problems like this to some degree; others have had it so badly that they’ve had to leave the movement entirely, and indeed criticize it from the complete opposite direction.
I am not saying that people who believe in AI risk only academically or performatively, and who can take refuge in it, don’t exist. Nor am I saying the author had to do yet more research and turn up solid evidence that the picture he gives is incomplete. But when you start describing the belief that everything and everyone you love may soon be destroyed as a comforting coping mechanism, I think you should bring at least a little skepticism to the table. It is possible that this just reflects the fact that you find a different real-world problem emotionally devastating at the moment, that thinking about a risk you don’t personally take seriously is a distraction for you, and that you failed your empathy roll this time.
A deeper issue might be the lack of discussion of the talent constraints in many top cause areas in the context of the controversies over spending on community building, which is arguably the key consideration much of that debate turns on. The increased spending on community building (which still isn’t even close to a majority of the spending) looks more uncomplicatedly bad if you miss this dimension.
Again, though, this piece goes through a ton of points, mostly quite well, and can’t be expected to land perfectly everywhere, so I’m pretty willing to forgive problems like these when I run into them. They are just the sort of thing that made me think this piece was more flawed than the others.
I agree, but EA is a big, messy, fast-changing movement with lots of internal diversity, controversies, projects, ideas, etc., and it is pretty poorly understood by the average person. This writer had to tease out a good, nuanced take on the movement basically from scratch, as far as I can tell, which isn’t easy, and I think it shows that he put a ton of care, thought, and research into the task. The product wasn’t perfect, but I think it’s much better than the average explainer that non-EAs, or frankly some EAs, would write on the topic.
Yeah I totally agree that the article was much better than many others on the subject, and that it isn’t an easy task. I just thought it was worth acknowledging the shortcomings as well.
I also think it was probably the most flawed of the three, but it also seemed like the most ambitious, packed with some of the most interesting information and narrative (and written by the person with the least prior familiarity), so I think I was unusually forgiving of the flaws it did have.
There were plenty of shortcomings, I thought, in the New Yorker piece (the only one of the three I’ve read).
Curious to hear what you thought these were if you feel it worth your time to share.
Fair.
Agreed! I’ve just read the articles you mentioned today and really liked them. Links:
Vox article by Dylan Matthews: “How effective altruism went from a niche movement to a billion-dollar force – Effective altruism has gone mainstream. Where does that leave it?”
New Yorker article by Gideon Lewis-Kraus: “The Reluctant Prophet of Effective Altruism – William MacAskill’s movement set out to help the global poor. Now his followers fret about runaway A.I. Have they seen our threats clearly, or lost their way?”