Howdy, and belatedly:
0) I am confident I understand; I just think it’s wrong. My impression is that HIM’s activity is less ‘using reason and evidence to work out what does the most good’ and more ‘using reason and evidence to best reconcile prior career commitments with EA principles’.
By analogy, if I were passionate about (e.g.) HIV/AIDS, education, or cancer treatment in LICs, the EA recommendation would not (and should not) be that I presume I maintain this commitment, but rather that I soberly evaluate how interventions within these areas stack up against all others (with the expectation that the best interventions emerging from this analysis would be very unlikely to line up with what my passions previously alighted upon). Setting up a ‘GiveWell for education interventions’ instead largely misses the point (and most of the EV ‘on the table’).
So too here. It would be surprising to discover that medical careers—typically selected before acquaintance with EA principles—are optimal or near-optimal by those lights (I’d be surprised if many, or any, EAs who weren’t already doctors thought they were). The face-value analysis is pessimistic on the ‘is this best’ question, notwithstanding that (e.g.) there is a lot of within-field variance to optimise: HIV/AIDS interventions vary in effectiveness by orders of magnitude, yet that doesn’t make them priorities on the current margin. As, to a first approximation, reality works in first-order terms, we’d want some very good reasons for second-order considerations nonetheless carrying the day: sentiments like ‘big tent’ and ‘EA is a question’ (etc.) can support anything (would they apply to PlayPumps?), so we should attempt to weigh these things up.
Your first point of clarification illustrates the ‘opacity’ I have in mind. “Not necessarily encouraging” folks to apply to medical school implies a lot of epistemic wiggle room: “Should I enter medicine?” and “Should I leave medicine?” are different but closely related questions (consider a 17-year-old applying to medicine versus an 18-year-old first-year student), and answers to the former sense-check answers to the latter. If you really think having impact as a doctor is, for many people, among the best things they can do, this suggests that for similar people you would encourage entering the profession (this doesn’t imply HIM should start doing this, but I think most in EA-land would find this result surprising and worth exploring—not least, it suggests a re-write of the 80k profile). In contrast, if the answer is “even for those initially minded to enter medicine, we’d usually recommend against it as an EA career choice”, then there should be a story for why this usual recommendation is greatly attenuated (or reversed?) for those already in the profession—particularly at an early stage like medical school. Again, this doesn’t govern HIM strategy—but it is informative, and knowing what you yourself think the answer is matters for transparent communication with your audience (even if they find this uncomfortable).
1) Regardless of the semantics of whether one should call someone like myself a ‘medic’ or not now, the substantive issue seems to be around whether medicine (generally speaking) is a high impact activity or not. Suppose (i.e. I’m not claiming this is the story for either of these professions I use as examples) (a):
‘High Impact law’: where the folks in the profession find their highest impact options often involve the practice of law in their ‘day job’, or ‘not strictly legal’ roles where their legal training is an important-to-crucial piece of career capital.
Contrast (b):
‘High Impact accountancy’: where folks in this profession find their highest impact options very rarely involve the practice of accountancy, and their best career options are typically those where their accounting background is only tangentially relevant (e.g. acquaintance with business operations, a ‘head for figures’).
In the latter case, ‘high impact accountancy’ looks like an odd term if the real message is to provide accountants with better career options, which typically involve leaving the profession. If medicine were like (a), all would seem well; but I think it is like (b), hence we disagree.
2) I’d be surprised if most of the folks I mentioned would find several years of medical experience valuable—and especially surprised (for the key question of career choice) if it were a leading opportunity versus alternative ways of spending 10-20% of their working lives. I can ask around, but if you have testimony to hand you’re welcome to correct me. I’d guess medical experience is much more relevant for much more medically adjacent (or simply medical) careers—but, per grandparent, these careers tend to be a lot less impactful in the first place.
3) Our hypothetical Alice may be right about the options you note being ‘higher impact’ than typical practice. Yet effectiveness is multiplier stacking (cf.), so Bob (who doesn’t labour under the ‘having impact as a doctor’ constraint) can still expect 10-100x more impact. The latter two examples you give (re. earning to give and working in a LIC) allow direct estimation:
Re. E2G, US and UK doctors are in the top ~5% of their respective populations by earnings. Many other careers plausibly accessible to doctors (e.g. technical start-ups, quant trading, SWE, consulting) have income distributions with either dramatically higher expected earnings, higher median earnings (e.g. friends of mine in some of these fields had higher starting salaries than my expected peak medical salary), or both. This all sets aside that the marginal returns to further money may be much lower now, given how much aligned money is already looking for interventions to fund (cf. ‘earning to give’ careers typically finding themselves a long way down 80k’s recommendations; forum discourse ad nauseam about ‘talent constraint’, unease about all the lucre sloshing around, etc.).
Re. LIC practice, if we take the 2-3 order-of-magnitude multiplier at face value (it looks implausible at the upper end) and combine it with ~2 DALYs/year averted by practice in a high-income country (taking my figures at face value, though they are likely too high), you get 2 × 300 ≈ 600 DALYs/year. In GiveWell-donation terms, with a conversion of (say) 40 DALYs per ‘life saved’ (not wildly unreasonable, as the lives saved are typically <5-year-olds), this is roughly $70,000/year. That is within reach of E2G doctors (leave alone E2G careers more broadly), and the real number is almost surely lower (probably by an integer factor): the ‘medical practice’ side of the equation is much less rigorous than the GiveWell CEE, and should be expected to regress downward.
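Spelling the back-of-envelope out (a sketch only: I take ~300 as the midpoint of the 2-3 order-of-magnitude multiplier, 2 DALYs/year for HIC practice, 40 DALYs per ‘life saved’, and roughly $4,500 per GiveWell-style life saved; that last cost figure is my assumption, not stated above):

```python
# Back-of-envelope: a year of LIC medical practice expressed as an
# equivalent GiveWell donation. Every input is a rough assumption.

hic_dalys_per_year = 2         # DALYs averted per year of HIC practice (likely too high)
lic_multiplier = 300           # midpoint of the claimed 2-3 order-of-magnitude multiplier
dalys_per_life_saved = 40      # rough conversion; GiveWell lives saved are mostly under-5s
usd_per_life_saved = 4_500     # assumed GiveWell-style cost per life saved (hypothetical)

lic_dalys_per_year = hic_dalys_per_year * lic_multiplier        # 600 DALYs/year
equivalent_lives = lic_dalys_per_year / dalys_per_life_saved    # 15 lives/year
equivalent_usd = equivalent_lives * usd_per_life_saved          # ~$70,000/year

print(f"{lic_dalys_per_year} DALYs ~ {equivalent_lives:.0f} lives ~ ${equivalent_usd:,.0f}/year")
```

Even modest downward revisions on the medical-practice side (the less rigorous side of the equation) shrink this figure quickly.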
As you say, various constraints (professional or personal) may rule out these other options: perhaps I aim at earning to give, but it happens that medical practice is my most lucrative employment (obviously much more plausible if one is later in one’s career); perhaps even if, in general, the sort of person drawn to medicine can make better contributions outside the profession, this is not true for me in particular. Yet candour seems to oblige foregrounding that such constraints often cut 90%+ of potential impact (and thus the importance of testing whether these constraints are strict).
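A toy model makes the multiplier-stacking point above concrete (the numbers are purely illustrative, not estimates): if impact is roughly a product of factors (cause, intervention, role, personal fit), then holding any one factor fixed at a mediocre value caps the whole product, however well the rest are optimised.

```python
# Toy model of multiplier stacking: impact as a product of factors.
# All numbers below are purely illustrative, not real estimates.
from math import prod

# Alice holds the 'having impact as a doctor' constraint fixed (role = 1);
# Bob is free to optimise the role factor along with everything else.
alice = {"cause": 10, "intervention": 5, "role": 1, "fit": 2}
bob = {"cause": 10, "intervention": 5, "role": 20, "fit": 2}

ratio = prod(bob.values()) / prod(alice.values())
print(ratio)  # 20.0 -- the single constrained factor drives the entire gap
```

The point is structural rather than about the particular numbers: optimising the remaining factors cannot recover what the fixed one forecloses.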
4) Although comparators are tricky (e.g. if my writing on medical careers were vastly less effective, it would be hard to tell), the content of the career plan changes noted in the OP would be more or less reassuring re. what High Impact Medicine is accomplishing. Per above, as getting the last multipliers is important, HIM’s impact is largely determined by the tail of highest-impact plan changes.
I think your two comments here are well-argued, internally consistent, and strong. However, I think I disagree with
in the context of EA career choice writ large, which I think may be enough to flip the bottom-line conclusion.
I think the crux for me is that if the differences in object-level impact across people/projects are high enough, then for anybody whose career or project is not in the small subset of the most impactful ones, their object-level impact will likely be dwarfed by their meta-level impact.
On the object-level for your examples, I think for “high-impact architecture,” having people with nontrivial background in architecture is likely useful for building civilizational refuges. More directly, I’ve talked to people who think that having 1-3 EA concierge doctors in the community (who can do things like understand our cultural contexts and weird problems and prescribe medicine in jurisdictions like the US and the UK) can be extremely helpful in increasing the impact of top talent in EA. This is analogous to the impact of e.g. existing community health or mental health workers in the community.
Potentially relevant subquestions:
To what extent does work in EA require EA alignment and acculturation?
The more you think EA orgs can hire well outside of EA for projects outside of EA natural core competencies, the more it matters that EAs target a relatively small subset of high-impact careers and skillsets to specialize in.
Conversely, if you think (as I do) that alignment and acculturation is just really important for excelling in EA jobs, it matters that we have people acquiring a wider scope of jobs and skillsets.
Do we live in a “big world” or a “small world” of EA things to do?
If we think there’s a narrow set of the best actions and causes, and a small number of people working in any of them, it matters more that individuals optimize for selecting the best things to do, on a bird’s-eye view.
If, conversely, we think the range of really good actions and causes is relatively wide, then it matters more that individuals weigh factors like personal fit heavily.
A potential argument here is that the profile you wrote on doctoring dates from back when EA was much smaller. We may expect conditions “on the ground” to have changed a lot, and while “concierge EA doctor” would have been a dumb career to aspire to five years ago, perhaps it is less so now.
(I personally think we likely still live in a relatively small world, which I think undercuts my counterarguments significantly).
Relatedly, how important is EA exploration vs exploitation?
How damning is the danger of introducing people with worse epistemics into the EA movement? And is worsening epistemics the most important/salient downside risk?
What are the best ways to prevent the above from happening?
Is it having really good first-order reasoning and arguments?
Is it having really good all-things-considered views that try to track all the important considerations, including rather esoteric ones?
???
It seems bizarre that, without my strong upvote, this comment is at minus 3 karma.
Karma polarization seems to have become much worse recently. I think a revision of the karma system is urgently needed.
I have some hope that splitting out votes into two dimensions (approval and agreement) might help with situations like this. At least it seems to have helped with some recent AI-adjacent threads on LW that were also pretty divisive.
Yes, that is also my hope. Thanks for developing this.
This might just be the work of one or two people. Maybe the mods can take a look?
We could create a script (using a sprinkling of NLP or a classifier) to identify unreasonably downvoted comments and show how prevalent this is.
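A minimal sketch of what such a script might look like (field names and thresholds are hypothetical; the simple ratio heuristic stands in for the NLP/classifier step, which would need real labelled data):

```python
# Sketch: flag comments whose downvote share is far above a site-wide norm.
# Assumes tuples of (comment_id, upvotes, downvotes) exported from the forum;
# all names and thresholds here are illustrative, not a real forum API.

def downvote_ratio(up: int, down: int) -> float:
    """Fraction of all votes that are downvotes (0.0 if no votes)."""
    total = up + down
    return down / total if total else 0.0

def flag_unusual(comments, threshold=0.5, min_votes=5):
    """Return ids of comments with enough votes and a high downvote share.

    A real version might add a text-quality score (e.g. toxicity or argument
    structure) so that well-written but heavily downvoted comments stand out.
    """
    return [
        cid
        for cid, up, down in comments
        if up + down >= min_votes and downvote_ratio(up, down) >= threshold
    ]

sample = [
    ("c1", 12, 1),  # normally received
    ("c2", 3, 9),   # heavily downvoted -> flagged
    ("c3", 0, 2),   # too few votes to judge
]
print(flag_unusual(sample))  # ['c2']
```

Running this over an export of recent comments would at least give a base rate for how common such downvote patterns are, before anyone speculates about one or two serial downvoters.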