The stuff about academic incentives makes it sound like there's some "commonsensical" alternative to longtermism out there that philosophers are burying in order to be more "interesting", and that just isn't true. There's literally no possible way to systematize ethics without ending up somewhere puzzling.
I've written elsewhere about the importance of distinguishing ethical theory and practice. This is a completely standard part of the consequentialist philosophical tradition. So again, I sort of agree with some of what Matthews says here, except for the philosophy-blaming part of it.
I also don't see any evidence for the claim of EA philosophers having "eroded the boundary between this kind of philosophizing and real-world decision-making". That would presumably require a critique of EA funding priorities (esp. by the Future Fund, as directed by Will and Nick), but he instead seems to allow that actual funding decisions have been well-grounded (at least "in most cases"), and merely recommends "more clearly stating" that this is so. That seems to give the game away that his critique here is purely about optics and communications, and not the "real-world decision-making" at all.
Finally, on SBF's lack of guard-rails: yes, he made crazy bad decisions. There is no philosophical view on which he made wise decisions. He didn't maximize happiness. (Bentham would be rolling in his grave right now, if he had a grave.) So the worries about maximizing the wrong thing are completely irrelevant here. The problem was a total lack of practical wisdom or prudence.
As J.S. Mill put it:
People talk as if... at the moment when some man feels tempted to meddle with the property or life of another, he had to begin considering for the first time whether murder and theft are injurious to human happiness. Even then I do not think that he would find the question very puzzling...
There is no difficulty in proving any ethical standard whatever to work ill, if we suppose universal idiocy to be conjoined with it; but on any hypothesis short of that, mankind must by this time have acquired positive beliefs as to the effects of some actions on their happiness; and the beliefs which have thus come down are the rules of morality for the multitude, and for the philosopher until he has succeeded in finding better.
I also don't see any evidence for the claim of EA philosophers having "eroded the boundary between this kind of philosophizing and real-world decision-making".
Have you visited the 80,000 Hours website recently?
I think that effective altruism centrally involves taking the ideas of philosophers and using them to inform real-world decision-making. I am very glad we're attempting this, but we must recognise that this is an extraordinarily risky business. Even the wisest humans are unqualified for this role. Many of our attempts are 51:49 bets at best - sometimes worth trying, rarely without grave downside risk, never without an accompanying imperative to listen carefully for feedback from the world. And yes - diverse, hedged experiments in overconfidence also make sense. And no, SBF was not hedged anything like enough to take his 51:49 bets - to the point of blameworthy, perhaps criminal negligence.
A notable exception to the "we're mostly clueless" situation is: catastrophes are bad. This view passes the "common sense" test, and the "nearly all the reasonable takes on moral philosophy" test too (negative utilitarianism is the notable exception). But our global resource allocation mechanisms are not taking "catastrophes are bad" seriously enough. So, EA - along with other groups and individuals - has a role to play in pushing sensible measures to reduce catastrophic risks up the agenda (as well as the sensible disaster mitigation prep).
(Derek Parfit's "extinction is much worse than 99.9% wipeout" claim is far more questionable - I put some of my chips on this, but not the majority.)
As you suggest, the transform function from "abstract philosophical idea" to "what do" is complicated and messy, and involves a lot of deference to existing norms and customs. Sadly, I think that many people with a "physics and philosophy" sensibility underrate just how complicated and messy the transform function really has to be. So they sometimes make bad decisions on principle instead of good decisions grounded in messy common sense.
I'm glad you shared the J.S. Mill quote.
...the beliefs which have thus come down are the rules of morality for the multitude, and for the philosopher until he has succeeded in finding better
EAs should not be encouraged to grant themselves practical exception from "the rules of morality for the multitude" if they think of themselves as philosophers. Genius, wise philosophers are extremely rare (cold take: Parfit wasn't one of them).
To be clear: I am strongly in favour of attempts to act on important insights from philosophy. I just think that this is hard to do well. One reason is that there is a notable minority of "physics and philosophy" folks who should not be made kings, because their "need for systematisation" is so dominant as to be a disastrous impediment for that role.
In my other comment, I shared links to Karnofsky, Beckstead and Cowen expressing views in the spirit of the above. From memory, Carl Shulman is in a similar place, and so are Alexander Berger and Ajeya Cotra.
My impression is that more than half of the most influential people in effective altruism are roughly where they should be on these topics, but some of the top "influencers", and many of the "second tier", are not.
(Views my own. Sword meme credit: the artist currently known as John Stewart Chill.)
Distinguish: (i) philosophically-informed ethical practice, vs (ii) "erod[ing] the boundary between [fantastical thought experiments] and real-world decision-making"
I think that (i) is straightforwardly good, central to EA, and a key component of what makes EA distinctively good. You seem to be asserting that (ii) is a common problem within EA, and I'm wondering what the evidence for this is. I don't see anyone advocating for implementing the repugnant conclusion in real life, for example.
I think that effective altruism centrally involves taking the ideas of philosophers and using them to inform real-world decision-making. I am very glad we're attempting this, but we must recognise that this is an extraordinarily risky business.
I think this is conflating distinct ideas. The "risky business" is simply real-world decision-making. There is no sense to the idea that philosophically-informed decision-making is inherently more risky than philosophically ignorant decision-making. [Quite the opposite: it wasn't until philosophers raised the stakes to salience that x-risk started to be taken even close to sufficiently seriously.]
Philosophers think about tricky edge cases which others tend to ignore, but unless you've some evidence that thinking about the edge cases makes us worse at responding to central cases - and again, I'm still waiting for evidence of this - then it seems to me that you're inventing associations where none exist in reality.
EAs should not be encouraged to grant themselves practical exception from "the rules of morality for the multitude" if they think of themselves as philosophers.
Of course. The end of the Mill quote is just flagging that traditional social norms are not beyond revision. We may have good grounds for critiquing the anti-gay sexual morality of our ancestors, for example, and so reject such outmoded norms (for everyone, not just ourselves) when we have truly "succeeded in finding better".
there is a notable minority of "physics and philosophy" folks who should not be made kings, because their "need for systematisation" is so dominant as to be a disastrous impediment for that role.
Do you take yourself to be disagreeing with me here? (Me: "People shouldn't be kings". You: "systematizing philosophers shouldn't be kings!" You realize that my claim entails yours, right?) I'm finding a lot of this exchange somewhat frustrating, because we seem to be talking past each other, and in a way where you seem to be implicitly attributing to me views or positions that I've already explicitly disavowed.
My sense is that we probably agree about which concrete things are bad; you perhaps have the false belief that I disagree with you on that, but actually the only disagreement is about whether philosophy tells us to do the things we both agree are bad (I say it doesn't). But if that doesn't match your sense of the dialectic, maybe you can clarify what it is that you take us to disagree about?
[12/15: Edited to tone down an intemperate sentence.]
There is no sense to the idea that philosophically-informed decision-making is inherently more risky than philosophically ignorant decision-making. [Quite the opposite: it wasn't until philosophers raised the stakes to salience that x-risk started to be taken even close to sufficiently seriously.]
I strongly disagree with this. The key reason is: most of the time, norms that have been exposed to evolutionary selection pressures beat explicit "rational reflection" by individual humans. One of the major mistakes of Enlightenment philosophers was to think it is usually the other way around. These mistakes were plausibly a necessary condition for some of the horrific violence that's taken place since they started trending.
I often run into philosophy graduates who tell me that relying on intuitive moral judgements about particular cases is "arrogant". I reply by asking "where do these intuitions come from?" The metaphysical realists say "they are truths of reason, underwritten by the non-natural essence of rationality itself". The naturalists say: "these intuitions were transmitted to you via culture and genetics, themselves subject to aeons of evolutionary pressure". I side with the naturalists, despite all the best arguments for non-naturalism (to my mind, they're mostly bad!).
One way to think about the 21st-century predicament is that we usually learn via trial and error and selection pressures, but in a world with modern technology this dynamic seems unlikely to go well.
it wasn't until philosophers raised the stakes to salience that x-risk started to be taken even close to sufficiently seriously.
I agree that philosophers, especially Derek Parfit, Nick Bostrom and Tyler Cowen*, have helped get this up the agenda. So too have many economists, astronomers, futurists, etc. Philosophers don't have a monopoly on identifying what matters in practice - in fact they're usually pretty bad at this.
Same thing goes if we look at social movements instead of individuals: the anti-nuclear bomb and environmental folks may have done more for getting catastrophic risk up the agenda than effective altruism has so far - especially in terms of generating a widespread cultural concern and sense of unease, which certainly warmed up the audience for Bostrom, Parfit, and so on.
The effective altruism movement is only just getting started (hopefully), and it has achieved remarkable successes already. So I do think we're on track to play a critical role, and we have Bostrom and Parfit and Ord and Sidgwick and Cowen to thank for that - along with many, many others.
*Those who don't see Tyler Cowen as fundamentally a philosopher - perhaps one of the greats, certainly better than Parfit (with whom he collaborated early on) - are not following carefully.
I'm not going to respond to the "show me the evidence" requests for now because I'm short on time and it's hard to do this well. Also: I think you and most readers can probably identify a bunch of evidence in favour of these takes if you take a while to look.
I'm sorry to hear you're finding this frustrating. Personally I'm enjoying our exchange because it's giving me a reason to clarify and write down a bunch of things I've been thinking about for a long time, and I'm interested to hear what you and others make of them.
On Twitter I suggested we arrange a time to call. Would you be up for this? If yes, send me a DM.
There's literally no possible way to systematize ethics without ending up somewhere puzzling.
Central plank of this perspective: systematizing ethics may not be the best idea, but some kinds of folks have a hard time recognising this. Systematising has its merits, but if you find ideological mess hard to tolerate, you shouldn't be a king.
Related reading:
Karnofsky on worldview diversification.
Karnofsky on sequence vs cluster thinking.
Possibly the most underrated criticism of EA on the EA Forum.
There's also Nick Beckstead disavowing his earlier "hardcore utilitarianism" in favour of something like Tyler Cowen's two-thirds utilitarianism.
I myself am a moral anti-realist, so I don't care much about these debates, though it's perpetually interesting to see debates on morality.
The stuff about academic incentives makes it sound like there's some "commonsensical" alternative to longtermism out there that philosophers are burying in order to be more "interesting", and that just isn't true. There's literally no possible way to systematize ethics without ending up somewhere puzzling.
This seems importantly strawmanny. Matthews' point (which I strongly agree with, fwiw) is an outside view one - something like "there are strong financial and reputational incentives for (EA) academics to reach 'interesting' conclusions requiring more research" and thus, by what I take as its extension, that whatever the "true importance" of such concerns is, we should expect it to be systemically overstated by those academics.
It is hardly a counterpoint to this for anyone (especially an academic!) to say "ah, but those interesting conclusions are of true importance!" - any more than it would be to hear (say) super wealthy people arguing for lower taxation on the grounds that it encourages productivity. The arguments/inside view aren't necessarily wrong, but they just don't really interact with the outside view, and finding a good epistemic balance is very hard.
To date, as far as I'm aware, the EA movement has been entirely focused on the inside view arguments, totally ignoring the incentives Matthews observes. As interested as I personally am in utilitarian philosophy, it's very unclear to me whether any of the puzzles you mention have any practical relevance to doing good in the current world, or whether more research would make it any clearer. And in addition to the worries about population ethics, there's a whole bunch of EA-adjacent research programmes that we could completely ignore (and have taken no practical action on to date), which nonetheless get significant funding that might counterfactually have gone to mosquito nets, GCR-prevention, etc:
Doomsday argument reasoning
Simulation argument reasoning
Wild animal suffering
Infinitarian ethics
Moral uncertainty
Cluelessness
Research into obscure decision theories*
* (less sure about this one. Maybe MIRI have done something with it behind closed doors, but if so I don't believe they've communicated it)
On top of those examples, Will has openly advocated the importance of "keeping EA weird".
So I think this is an issue that deserves a lot more scrutiny (presumably, ironically, most of which would come from academic EAs).
Distinguish two critiques in this general vicinity:
(1) Longtermism seems weird because its main proponents are philosophers who have professional incentives to make "interesting"/extreme claims regardless of their truth or plausibility.
(2) Academics are likely to "systematically overstate" the importance of their own research, so we shouldn't take their claims about "true importance" at face value.
These are two very different critiques! Matthews clearly said (1), and that's what I was responding to. His explanatory claim is demonstrably false. Your critique (2) seems right to me, though it's just a special case of the broader claim:
(2*) Everyone is likely to systematically overstate the importance of their own work, so we shouldn't take their claims about the true importance of their work at face value.
I agree that we need to critically evaluate claims that someone's work is important. There's nothing special about academic work in this respect, though.
I agree that we need to critically evaluate claims that someone's work is important. There's nothing special about academic work in this respect, though.
Strong disagree with this part. Academics, in the sense of "people who are paid to do specialised research", are substantially more incentivised to overstate their value than a) people who aren't paid, or b) people who are paid to do more superficial/multi-focus research (e.g. consultants), and who could therefore pivot easily if it turned out some project they were on was low value.
It sounds like you're talking about researchers outside of academia. Academics aren't paid directly for their research, and the objective "importance" of our research counts for literally nothing in tenure and promotion decisions, compared to more mundane metrics like how many papers we've published and in what venues, and whether it is deemed suitably impressive (by disciplinary standards, which again have zero connection to objective importance) by senior evaluators within the discipline.
A tenured academic, like a supreme court justice, has a job for life which leaves them far less vulnerable to incentives than almost anyone else.
Why was this downvoted?