I think one issue this topic raises is the relative importance of problem areas. In one scenario, the importance of the main EA themes (AI, nuclear security, etc.) is so much greater than that of the second tier (smoking rates, trade reform, etc.) that even someone very closely adjacent to the second tier is better off pushing onward to the first tier. In other scenarios, some second-tier problems would be important in their own right.
LHarrison’s Quick takes
[Question] Resources for Mid-Career Updates
I should say “scalable,” not “harder.” The issue is that advice to a promising young graduate can be rather generic and repeatable for each graduate, with some adjustment for whether they majored in Mandarin or something similar. Advice for someone mid-career is going to be more personal, and so it doesn’t scale. But I’m sure that if you sat down with someone one-on-one you could do something positive.
The issue is one of transferring career capital. You could limit the pathways to the top tiers identified by EA efforts, but I would argue that if you consider career capital that may not transfer well to the major efforts (say, AI and its associated impacts), you might encourage the person to stick with a cause you would not advise a new graduate to enter. Or, if you stick only to the top EA tiers, it may be impossible to get from their current career to something relevant.
So let’s say that AI, bioterrorism, nuclear security, and the tail risks of climate change are the big problem areas. Now take someone who has done a lot of work on clean water and hand-washing campaigns in the developing world, is well networked, fluent in several languages, and so on. They may be better positioned to shift into reducing smoking rates in the developed world (an example I pulled from the 80,000 Hours website) than to pick up something in the top tier. That’s a case where you can connect someone to a lower EA priority where they are well positioned to be especially effective.
If you have someone in a government job at, say, State or Defense, you can steer them toward clearly defined EA goals. Some other areas overlap as well: Treasury has counter-terrorism and sanctions-enforcement work that might move you in a more EA direction. But I don’t think EA is well prepared to offer advice to, just naming random departments, someone at Agriculture, Transportation, Interior, or Education. There you would really need to understand the particulars of someone’s work and develop new research into areas I haven’t seen EA discuss in any detail.
This is similar to what I have been thinking, as I’ve had “Learn Mandarin” down as a goal for four years now. My idea was to treat it as a “hobby” at first and then shift. But your point stands: you need the time and energy, and I’ve never been able to break out of my currently demanding job to start picking it up.
I think this suggests identifying concrete steps that can be taken as a “hobby” but produce something functional that can be signaled in the career marketplace: certificates and the like.
Is it possible that the effort needed to give specific rather than generic advice is simply not worth it? Perhaps it’s better to try to influence people at the start of their 80,000 hours, and once someone has invested a portion of their life (one third?), to just flip them to earning to give and move on to the next 22-year-old Ivy League graduate?
At the start, someone with 79,000 hours to go is still essentially the same as the 22-year-old Ivy League graduate with 80,000 hours, so the advice is the same.
There has to be a tipping point, which probably depends on whether the person’s first 10,000 or 20,000 hours built up flexible or more specialized career capital. The person who became a pharmacist is more locked in than the person who got an MBA.
[Question] 20,000/40,000 Hours- MidCareer Options
I am curious about the finding that “government and policy experts” are perceived as a priority for the EA community as a whole, but not for individual organizations. The speculation in the report offers some scenarios for what respondents meant by rating this highly, but I haven’t seen comments here that address this open-ended question.
I comment as someone with a government and policy background who has been exploring the EA community with curiosity over the last year or so. I think I’m mid-career, looking at effective giving strategies while trying to read more on policy roles within EA.
I think a focus on partisan politics, especially one that narrows its scope to Republicans, lacks a firm framework for how it is supposed to produce a specific outcome. One individual Republican representing a heavily Democratic district on the front lines of sea-level rise, discussing a carbon tax with almost no real support from the rest of his caucus, is an aberration.
Across the board, Republican politicians oppose carbon taxes; the House took such a vote this week, and CCL’s efforts to provide cover to the Republicans in the Climate Solutions Caucus who voted for a resolution opposing carbon taxes seem like the very definition of ineffective.
If there’s a case for engagement in the political process around climate change, it lies in looking at the risks of climate change and determining the most effective strategies for adapting to them. For example, perhaps a certain degree of sea-level rise is already baked into the cake, and an effective policy response is to reduce the exposure of properties to that risk; coastal resiliency and flood-insurance reform would then make sense. However, while the value of the properties and communities involved in, say, significant flooding in Miami may be high, I don’t know whether it’s significant in any global sense.
Distribution of Impact Across General Population
Has there been much research estimating the impact that members of the general population have, and what sort of distribution curve it would produce?
The vast majority of people presumably lead relatively inconsequential lives. But is the distribution normal, centered on an impact of zero? That would assume equal numbers of people with successively greater positive and net-negative impacts. Would we instead expect a very long tail of high positive-impact people?
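To make the contrast concrete, here is a minimal simulation sketch (my own illustration, not from any EA research) comparing the two hypotheses: a symmetric normal distribution of impact centered at zero versus a heavy right tail, with a lognormal as one common stand-in for the latter. The distributions and parameters are arbitrary assumptions chosen only to show how different the two pictures look.

```python
import random
import statistics

# Toy models of how "impact" might be distributed across a population.
# Both distributions and their parameters are illustrative assumptions.
random.seed(0)
N = 100_000

# Hypothesis 1: symmetric normal distribution centered at zero impact --
# equal numbers of people with successively greater positive or negative impact.
normal_impacts = [random.gauss(0, 1) for _ in range(N)]

# Hypothesis 2: heavy right tail -- most people near zero, a small number
# with very large positive impact (lognormal used as a stand-in).
heavy_tail_impacts = [random.lognormvariate(0, 1.5) for _ in range(N)]

# Under the symmetric model, mean and median are both near zero.
print(round(statistics.mean(normal_impacts), 2))
print(round(statistics.median(normal_impacts), 2))

# Under the heavy-tailed model, the mean far exceeds the median, and the
# top 1% of people account for a large share of the total impact.
heavy_tail_impacts.sort()
top_1pct_share = sum(heavy_tail_impacts[-N // 100:]) / sum(heavy_tail_impacts)
print(round(statistics.mean(heavy_tail_impacts), 2))
print(round(statistics.median(heavy_tail_impacts), 2))
print(round(top_1pct_share, 2))
```

Under the second model, the "vast majority lead inconsequential lives" observation and a large total population impact are perfectly compatible: the aggregate is dominated by the tail.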