The stuff about academic incentives makes it sound like there’s some “commonsensical” alternative to longtermism out there that philosophers are burying in order to be more “interesting”, and that just isn’t true. There’s literally no possible way to systematize ethics without ending up somewhere puzzling.
This seems importantly strawmanny. Matthews’ point (which I strongly agree with, fwiw) is an outside view one—something like ‘there are strong financial and reputational incentives for (EA) academics to reach “interesting” conclusions requiring more research’ and thus, by what I take as its extension, that whatever the ‘true importance’ of such concerns is, we should expect it to be systematically overstated by those academics.
It is hardly a counterpoint to this for anyone (especially an academic!) to say ‘ah, but those interesting conclusions are of true importance!’ - any more than it would be to hear (say) super wealthy people arguing for lower taxation on the grounds that it encourages productivity. The arguments/inside view aren’t necessarily wrong, but they just don’t really interact with the outside view, and finding a good epistemic balance is very hard.
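To make that concrete, here’s a toy model of the worry (entirely my own illustration, with made-up numbers—not anything Matthews proposed). Suppose every researcher’s reported importance is their true estimate plus an incentive-driven bias. Then however persuasive each individual report is, taking reports at face value overshoots by the bias on average, while a flat outside-view discount corrects for it without engaging with any report’s content:

```python
# Toy model: reported importance = true importance + incentive-driven bias.
# All parameters are illustrative assumptions, not empirical estimates.
import random

random.seed(0)

TRUE_MEAN = 1.0   # assumed average true importance of a research agenda
BIAS_MEAN = 0.5   # assumed average overstatement due to career incentives
N = 10_000        # number of simulated researcher reports

true_importance = [random.gauss(TRUE_MEAN, 0.3) for _ in range(N)]
reported = [v + random.gauss(BIAS_MEAN, 0.2) for v in true_importance]

# Observer 1 takes reports at face value; Observer 2 applies a flat
# outside-view discount equal to the expected bias.
face_value_error = sum(r - v for r, v in zip(reported, true_importance)) / N
discounted_error = sum((r - BIAS_MEAN) - v
                       for r, v in zip(reported, true_importance)) / N

print(f"mean error, face value: {face_value_error:+.3f}")  # ~ +0.5
print(f"mean error, discounted: {discounted_error:+.3f}")  # ~ 0.0
```

The point isn’t the numbers; it’s that the correction operates at the level of the reporting process, which is why restating the inside-view case doesn’t answer it.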
To date, as far as I’m aware, the EA movement has been entirely focused on the inside-view arguments, totally ignoring the incentives Matthews observes. As interested as I personally am in utilitarian philosophy, it’s very unclear to me whether any of the puzzles you mention have any practical relevance to doing good in the current world, or whether more research would make that any clearer. And in addition to the worries about population ethics, there’s a whole bunch of EA-adjacent research programmes that we could completely ignore (and have taken no practical action on to date), which nonetheless get significant funding that might counterfactually have gone to mosquito nets, GCR prevention, etc.:
Doomsday argument reasoning
Simulation argument reasoning
Wild animal suffering
Infinitarian ethics
Moral uncertainty
Cluelessness
Research into obscure decision theories*
* (less sure about this one. Maybe MIRI have done something with it behind closed doors, but if so I don’t believe they’ve communicated it)
On top of those examples, Will has openly argued for the importance of ‘keeping EA weird’.
So I think this is an issue that deserves a lot more scrutiny (presumably, ironically, most of which would come from academic EAs).
Distinguish two critiques in this general vicinity:
(1) Longtermism seems weird because its main proponents are philosophers who have professional incentives to make “interesting”/extreme claims regardless of their truth or plausibility.
(2) Academics are likely to “systematically overstate” the importance of their own research, so we shouldn’t take their claims about “true importance” at face value.
These are two very different critiques! Matthews clearly said (1), and that’s what I was responding to. His explanatory claim is demonstrably false. Your critique (2) seems right to me, though it generalizes trivially to the broader claim:
(2*) Everyone is likely to systematically overstate the importance of their own work, so we shouldn’t take their claims about the true importance of their work at face value.
I agree that we need to critically evaluate claims that someone’s work is important. There’s nothing special about academic work in this respect, though.
Strong disagree with this part. Academics, in the sense of ‘people who are paid to do specialised research’, are substantially more incentivised to overstate the value of that research than a) people who aren’t paid, or b) people who are paid to do more superficial, multi-focus research (e.g. consultants), who could therefore pivot easily if it turned out some project they were on was low value.
It sounds like you’re talking about researchers outside of academia. Academics aren’t paid directly for their research, and the objective “importance” of our research counts for literally nothing in tenure and promotion decisions, compared to more mundane metrics like how many papers we’ve published and in what venues, and whether it is deemed suitably impressive (by disciplinary standards, which again have zero connection to objective importance) by senior evaluators within the discipline.
A tenured academic, like a Supreme Court justice, has a job for life, which leaves them far less vulnerable to incentives than almost anyone else.
Why was this downvoted?