This request is extremely unreasonable, and I am downvoting and replying (despite agreeing with your core claim) specifically to make a point of not giving in to such requests, and of not letting a culture of making them with impunity take hold. I hope in the future to read posts about your ideas that make your points without such attempts to manipulate readers.
It seems unlikely that the distribution of 100x-1000x impact people is *exactly* the same between your “network” and “community” groups, and if it’s even a little bit biased towards one or the other, the groups would wind up very far from having equal average impact per person. I agree it’s not obvious which way such a bias would go. (I do expect the community helps its members have higher impact compared to their personal counterfactuals, but perhaps e.g. people are more likely to join the community if they are disappointed with their current impact levels? Alternatively, maybe everything else is swamped by the question of which group you put Moskovitz in?) However, assuming the multiplier is close to 1 rather than much higher or lower seems unwarranted, and this seems to be a key question on which the rest of your conclusions more or less depend.
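(To illustrate the sensitivity being described here, a minimal sketch: the Pareto impact distribution, the tail shape, and the membership probabilities below are all made-up illustrative assumptions, not data from 80k or the original post. The point it shows is that when a heavy tail means a small group carries most of the total impact, even a modest bias in which side that small group joins moves the group averages apart, and with very heavy tails the ratio is largely determined by where the top handful of individuals land.)

```python
# Illustrative sketch only: assumed Pareto "impact" scores and made-up
# membership probabilities, not real data about the EA community/network.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Classical Pareto with shape alpha (smaller alpha = heavier tail).
# At alpha ~ 1.1 the top 1% of people carry most of the total impact.
alpha = 1.1
impact = (1 - rng.random(N)) ** (-1 / alpha)

# Everyone joins the "community" with probability 0.5, except the top 1%
# by impact, who join with probability p_top instead.
top = impact >= np.quantile(impact, 0.99)
for p_top in (0.5, 0.6, 0.7):
    in_community = rng.random(N) < np.where(top, p_top, 0.5)
    ratio = impact[in_community].mean() / impact[~in_community].mean()
    print(f"p_top={p_top:.1f}: community/network mean impact ratio ~ {ratio:.2f}")
```

(Rerunning with alpha closer to 1 makes the ratio swing wildly from seed to seed, because it comes to depend mostly on which group the single biggest outlier joins, which is the Moskovitz point above.)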
I’ve made the diagram assuming equal average impact whether someone is in the ‘community’ or ‘network’, but even if you doubled or tripled the average impact you think someone in the community has, there would still be more overall impact in the network.
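(As a worked illustration of that claim, with a hypothetical size ratio since the diagram’s exact figures aren’t reproduced here: if the network has ten times as many people as the community, then even a 3x per-person multiplier for the community leaves the network ahead in total, since 10N × 1 = 10N for the network versus N × 3 = 3N for the community, where N is the number of people in the community.)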
People in EA regularly talk about the most effective community members having 100x or 1000x the impact of a typical EA-adjacent person, with impact following a power law distribution. For example, 80k attempts to measure “impact-adjusted significant plan changes” as a result of their work, where a “1” is a median GWWC pledger (already more commitment than a lot of EA-adjacent people show; such people are curious students or giving 1% or something, so maybe a 0.1). 80k claims credit for dozens of “rated 10” plan changes per year, a handful of “rated 100” per year, and at least one “rated 1000” (see p15 of their 2018 annual report here).
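(To put rough numbers on how top-heavy that scale is, reading “dozens” as about 30 and “a handful” as about 5, which are my illustrative guesses rather than figures from the report: 30 × 10 + 5 × 100 + 1 × 1000 = 300 + 500 + 1000, so on 80k’s own scale the single rated-1000 change counts for more than all the rated-10 changes put together.)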
I’m personally skeptical of some of the assumptions about future expected impact 80k rely on when making these estimates, and some of their “plan changes” are presumably by people who would fall under “network” and not “community” in your taxonomy. (Indeed on my own career coaching call with them they said they thought their coaching was most likely to be helpful to people new to the EA community, though they think it can provide some value to people more familiar with EA ideas as well.) But it seems very strange for you to anchor on a 1-3x community vs network impact multiplier, without engaging with core EA orgs’ belief that 100x-10000x differences between EA-adjacent people are plausible.
The specific alternatives will vary depending on the path in question and on hard-to-predict things about the future. But if someone spends 5-10 years building career capital to get an operations job at an EA org, and it then turns out that field is extremely crowded, with the vast majority of applicants unable to get such jobs, their alternatives may be limited to operations jobs at ineffective charities or random businesses. That may leave them much worse off (both personally and in terms of impact) than if they’d never encountered advice to go into operations and had instead followed one of the more common career paths for ambitious graduates, and been able to donate more as a result.
I’m also concerned about broader changes in how we think about priority paths over the coming 5-10 years. A few years ago, 80k strongly recommended going into management consulting, or trying to found a tech startup. Somebody who made multi-year plans and sacrifices based on that advice would find today that 80k now considers what they did to have been of little value.
It’s also important to remember that if in 10 years some 80,000 Hours-recommended career path, such as AI policy, is less neglected than it used to be, that is a good thing, and doesn’t undermine people having worked toward it—it’s less neglected in this case because more people worked toward it.
80,000 Hours has a responsibility to the people who put their trust in it when making their most important life decisions, to do everything it reasonably can to ensure that its advice does not make them worse off, even if betraying their trust would (considered narrowly/naively) lead to an increase in global utility. Comments like the above, as well as the negligence in posting warnings on outdated/unendorsed pages until months or years later, comments elsewhere in the thread worrying about screening off people who 80k’s advice could help while ignoring the importance of screening off those who it would hurt, and the lack of attention to backup plans, all give me the impression that 80k doesn’t really care about the outcomes of the individual people who trust it, and certainly doesn’t take its responsibility towards them as seriously as it should. Is this true? Do I need to warn people I care about to avoid relying on 80k for advice and read its pages only with caution and suspicion?
That applies to most of the deprecated pages, but doesn’t apply to the quiz, because its results are based on the database of existing career reviews. The fact that it gives the same results for nearly everybody is the result of new reviews being added to that database since it was written/calibrated. It’s not actually possible to get it to show you the results it would have shown you back in 2016, the last time it was at all endorsed.
Why don’t you just take it down entirely? It’s already basically non-functional.
These days 80k explicitly advises against trying to build flexible career capital (though I think they’re probably wrong about this).
Note that the “policy-oriented government job” article is specific to the UK. Some of the arguments about impact may generalize, but the UK civil service generally has more influence on policy than its counterparts in the US or some other countries, and the more specific information about entry paths etc. doesn’t really generalize at all.
Have they ever admitted to specifically targeting graduates of elite colleges rather than ambitious graduates generally?
On career capital: I find it quite hard to square your comments that “Most readers who are still early in their careers must spend considerable time building targeted career capital before they can enter the roles 80,000 Hours promotes most” and “building career capital that’s relevant to where you want to be in 5 or 10 years is often exactly what you should be doing” with the comments from the annual report that “You can get good career capital in positions with high immediate impact (especially problem-area specific career capital), including most of those we recommend” and “Discount rates on aligned-talent are quite high in some of the priority paths, and seem to have increased, making career capital less valuable.” Reading the annual report definitely gives me the impression that 80k absolutely does not endorse spending 5-10 years in low-impact roles to try to build career capital for most people, and so if this is incorrect then it seems like further clarification of 80k’s views on career capital on the website should be a high priority.
On planning: While I expect 80k’s current priority paths will probably all still be considered important in 5-10 years’ time, it’s harder to predict whether they will still be considered neglected. It’s easy to imagine some of the fields in question becoming very crowded with qualified candidates, such that people who start working towards them now will have extreme difficulty getting hired in a target role 5-10 years from now, and will have low counterfactual impact if they do get hired. (It’s also possible, though less likely, that estimates of the tractability of some of the priorities will decline.)
On outdated content: I appreciate 80k’s efforts to tag content that is no longer endorsed, but there have often been long delays between new contradictory content being posted and old posts being tagged (even when the new post links to the old post, so it’s not like it would have required extra effort to find posts needing tagging). Further, posts about the new position sometimes fail to engage with the arguments for the old position. And in many cases I’m not sure what purpose is served by leaving the old posts up at all. (It’s not like taking them down would be hiding anything; they’d still be on archive.org.)
On article targeting: In your original post you gave the example of 80k deliberately working to create more content targeted at people later in their careers, and this winding up discouraging some readers who are still early in their careers. Surely at least in that case you could have been explicit about the different audience you were deliberately targeting? More generally, you express concern about “screening off people who could benefit from the research”, but while such false negatives are bad, failing to screen off people for whom your advice would be useless or harmful is also bad, and I think 80k currently errs significantly in the latter direction. I also find it worrying if not even 80k’s authors know who their advice is written for, since knowing your target audience is a foundational requirement for communication of any kind and especially important when communicating advice if you want it to be useful and not counterproductive.
Yeah, my response was directed at cole_haus suggesting the quiz as an example of 80k currently providing personalized content, when in fact it’s pretty clearly deprecated, unmaintained, and no longer linked anywhere prominent within the site. (Though I’m not sure what purpose keeping it up at all serves at this point.)
Last I checked, the career quiz recommends almost everyone (including everyone “early” and “middle” career, no matter their other responses) either “Policy-oriented [UK] government jobs” or “[US] Congressional staffer”, so it hardly seems very reflective of actually believing that the “list” is very different for different people.
The posts I linked on whether it’s worth pursuing flexible long-term career capital (yes says the Career Guide page, no says a section buried in an Annual Report, though they finally added a note/link from the yes page to the no page a year later when I pointed it out to them) are one example.
The “clarifying talent gaps” blog post largely contradicts an earlier post still linked from the “Research > Overview (start here)” page expressing concerns about an impending shortage of direct workers in general, as well as Key Articles suggesting that “almost all graduates” should seek jobs in research, policy or EA orgs (with earning-to-give only as a last resort) regardless of their specific skills. The latter in turn contradict pages still in the Career Guide and other posts emphasizing earning-to-give as potentially superior to even such high-impact careers as nonprofit CEO or vaccine research.
Earlier they changed their minds on replaceability (before, after); the deprecated view there is no longer prominently linked anywhere but I’m unsure of the wisdom of leaving it up at all.
Given how much 80k’s views have changed over the past 5-10 years, it’s hard to be optimistic about the prospects for successfully building narrow career capital targeted to the skill bottlenecks of 5-10 years from now!
1) If the way you talk about career capital here is indicative of 80k’s current thinking then it sounds like they’ve changed their position AGAIN, mostly reversing their position from 2018 (that you should focus on roles with high immediate impact or on acquiring very specific narrow career capital as quickly as possible) and returning to something more like their position from 2016 (that your impact will mostly come many years into your career so you should focus on building career capital to be in the best position then). It’s possible the new position is a synthesis somewhere between these two, since you do include the word “targeted”, but how well can people feasibly target narrow career capital 5-10 years out when the skill bottlenecks of the future will surely be different?
2) In general I’ve noticed a pattern (of which the above two linked posts are an example) where 80k posts something like “our posts stating that ‘A is true’ have inadvertently caused many people to believe that A is true, here’s why A is actually false” while leaving up the old posts that say ‘A is true’ (sometimes without even a note that they might be outdated). This is especially bad when the older ‘A is true’ content is linked conveniently from the front page while the more recent updates are buried in blog history. Is it feasible for 80k to be more aggressive in taking down pages they no longer endorse so they at least don’t continue to do damage, and so the rest of us can more easily keep track of what 80k actually currently believes?
3) Regarding the problem of a diverse audience with differing needs, an obvious strategy for dealing with this is to explicitly state (for each page or section of the site) who the intended audience is. I’ve found that 80k seems strangely reluctant to answer what level of human/social/career/financial capital they assume their audience has, even when asked directly.
Most people don’t have the skills required to manage themselves, start their own org, organize their own event, etc; a large fraction of people need someone else to assign them tasks to even keep their own household running. Helping people get better at management skills (at least for managing themselves, though ability to manage others as well would be ideal) could potentially be very high-value. There don’t seem to be many good resources on how to do this currently.
Are you able to briefly characterize here who your intended audience is, if we’re mistaken about it being “top half of Oxford” or similar? I guess it varies some between pages.
Can you say more about your experiences as a teacher and as a policy professional? What did you have to do to get those jobs, and what were the expectations once you had them? What was the pay like? Were you able to observe the interview/hiring process for anybody else being hired for the same jobs? This is exactly the kind of concrete info I’m hoping to find more of.
I’m not convinced it’s the impact-maximizing approach either. Some people who could potentially win the career “lottery” and have a truly extraordinary impact might reasonably be put off early on by advice that doesn’t seem to care adequately about what happens to them in the case where they don’t win.
We encourage people to make a ranking of options; their backup Plan B is then a less competitive option than their Plan A that they can switch into if Plan A doesn’t work out. Plan Z is how to get back on your feet if lots goes wrong. We lead people through a process to come up with their Plan B and Plan Z in our career planning tool.
This tool provides a good overall framework for thinking about career choices, but my answer to many of its questions is “I don’t know, that’s why I’m asking you”. On the specific subject of making a Plan Z, it appears the sum total of what it says is “Some common examples of Plan Z include: move back in with parents and work at deli from last summer; sleep on a friend’s sofa and spend savings until you can find a job; doing private tutoring.” These depend on resources many people don’t have, and in fact have plenty of ways they can go wrong themselves (the deli might decline to hire you, you might run out of savings before you can find a job, you might be unable to find any tutoring clients). Certainly I wouldn’t be willing to take a major career risk if one of those were my only backup plan, without a lot more concrete data on tractability (which basically doesn’t exist as far as I know; I don’t think anybody publishes acceptance rates for jobs at local delis).
I understand this isn’t your focus, just noting that my concerns on that point still apply.
Unfortunately this competes with the importance of interventions failing fast. If it’s going to take several years before the expected benefits of an intervention are clearly distinguishable from noise, there is a high risk that you’ll waste a lot of time on it before finding out it didn’t actually help, you won’t be able to experiment with different variants of the intervention to find out which work best, and even if you’re confident it will help you might find it infeasible to maintain motivation when the reward feedback loop is so long.