I deeply enjoy your blog. I often grow frustrated with critiques of Effective Altruism for what I perceive as lacking rigor, charitability, and offered alternatives. This is very different from your blog. I think your blog hits a great balance in that I feel like you genuinely engage with the ideas from a well-intended perspective, yet do not hesitate to be critical and cutting when you feel like an issue is not well justified in EA discourse.
I particularly enjoyed the AI risk series and Exaggerating the risks series. I take this to be the areas where, if EA erred, it would be most impactful to spot it early and react, given the amount of funding and talent going into risk mitigation. I would love to read more content on regression to the inscrutable, which I found very insightful. I would also love to read more of your engagement with AI papers and articles.
I’d be interested in whether you or others have favorite critiques of EA that aim for a similar kind of engagement.
Thanks mhendric! I appreciate the kind words.
The honest truth is that prestige hierarchies get in the way of many people writing good critiques of EA. For any X (= feminism, Marxism, non-consequentialism, …) there's much more glory in writing a paper about X than a paper about X's implications for EA, so few such papers get written. As a result, really the only way to get a good sense of what any particular X implies for EA is to learn a lot about X yourself. That's frustrating, because EAs genuinely want to know what X implies for EA, but don't have years to learn.
Some publications (the The Good It Promises volume; my blog) aim to bridge the gap, but there are also some decent papers if you're willing to read them in full. The Pettigrew, Heikkinen, and Curran papers in the GPI working paper series are worth reading, and GPI's forthcoming longtermism volume will have many others.
In the meantime … I share your frustration. It’s just very hard to convince people to sit down and spend a few years learning about EA before they write critiques of it (just like it’s very hard to convince EAs to spend a few years learning about some specific X just to see what X might imply for EA). I’m not entirely sure how we will bridge this gap, but I hope we do.
I’ll try to write more on the regression to the inscrutable and on AI papers. Any particular papers you want to hear about?
Not that my vote counts for a lot, but I think it would be worthwhile for EA-aligned (or aligned-ish) sources to fund distillation and simplification of thoughtful criticism that currently sits in a too-technical form, or is otherwise hard to access for many people who would benefit from reading it. That seems like pretty low-hanging fruit, and making extant work more accessible doesn't really implicate some of the potential concerns and challenges of commissioning new criticism. I wasn't immediately able to find the papers you referenced on mobile, but my vague recollection of other GPI working papers is that accessibility could be a challenge for a bright but very generalist reader.
To operationalize: I don't have the money to fairly pay a paper's author, or an advanced grad student, to distill a technical paper into something at the level of several blog posts like the ones on your blog. But it's the kind of thing I'd personally be willing to fund if enough people were willing to share the cost (the definition of "enough" depends on the financial specifics).
Thank you for the recommendations. To be honest, the parts of The Good It Promises that I read struck me as very low quality, and significantly worse than the average EA critique. The authors did not seem to me to engage in good-faith critique, and I found a fair number of their claims and proposed alternatives outlandish and unconvincing. I also found many of the arguments relying on buzzwords rather than actual argument, which made the book feel a bit like a vicious Twitter thread. I read only about half of the book; maybe I focused on the wrong parts.
I will check the GPI working paper series for alternative critiques. Thank you for recommending them.
Two AI papers I’d be particularly interested to see you engage with are “Concrete Problems in AI Safety” and “The Alignment Problem from a Deep Learning Perspective”.
On another note, I recently heard an interesting good-faith critique of EA called “But is it altruism?” by Peruzzi & Calderon. It is not published yet, but when it comes out, I could send it to you; it may be an interesting critique to dissect on the blog.
Again, thanks for your work on this blog. It’s really appreciated, and it is impressive that you are able to spend so much time thoughtfully reflecting on EA while being a full-time academic.
Thanks mhendric! Those are both good papers to consider, and I’ll do my best to address them.
I didn’t know the “But is it altruism?” paper. Please do send it when it is out; I’d like to read it and hopefully write about it.