Ultimately, the operationalising needs to be done by the organisations and community leaders themselves when they do their own planning, given the details of how they interact with the community, and while balancing the considerations raised at the leaders forum against their other priorities.
I’m happy to see more debate of how much we should prioritise AI safety. We intend to debate some of these issues on the podcast, and have already started recording with Ben Garfinkel.
However, I think you’re misrepresenting how much the key ideas series recommends working on AI safety. We feature a range of other problem areas prominently, and I don’t think many readers will come away thinking that our position is that “EA should focus on AI alone”.
We list 9 priority career paths, of which only 2 are directly related to AI safety, recommend a variety of other options, and say that there are many good options we don’t list.
Elsewhere on the page, we also discuss the importance of personal fit and coordination, which can make it better for an individual to enter different problem areas from those we most highlight.
The most relevant section is short, so I’d encourage readers of this thread to read the section and make up their own mind.
Yes, great paper and exciting work. Here are some further questions I’d be interested in (apologies if they result from misunderstanding the paper—I’ve only skimmed it once).
1) I’d love to see more work on Phil’s first bullet point above.
Would you guess that, due to the global public good problem and impatience, people with a low pure rate of time preference will generally believe society is a long way from the optimal allocation to safety, and therefore that increasing investment in safety is currently much higher impact than increasing growth?
2) What would the impact of uncertainty about the parameters be? Should we act as if we’re generally in the eta > beta (but not much greater) regime, since that’s where altruists could have the most impact?
3) You look at the chance of humanity surviving indefinitely—but don’t we care more about something like the expected number of lives?
Might we be in the eta >> beta regime, but humanity still have a long future in expectation (e.g. tens of millions of years rather than billions)? It might then still be very valuable to further extend the lifetime of civilisation, even if extinction is ultimately inevitable.
Or are there regimes where focusing on helping people in the short-term is the best thing to do?
Would looking at expected lifetime rather than probability of making it have other impacts on the conclusions? For example, I could imagine it might be worth trading a small increase in risk for acceleration, so long as it allows more people to live in the interim in expectation.
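To make the expected-lives question concrete, here is a minimal toy calculation (all numbers are hypothetical and not from the paper): with a constant annual extinction hazard and constant population, the chance of surviving forever is zero, yet expected total lives can still be vast, and a small hazard increase can be worth trading for faster population growth.

```python
# Toy model (hypothetical parameters, not from the paper): constant annual
# extinction hazard h and constant population N per year. The probability
# of surviving indefinitely is zero, yet expected total lives approach
# N / h -- so "expected lives" and "probability of making it" can point
# in different directions.

def expected_total_lives(population_per_year, hazard, horizon_years):
    """Expected person-years lived before extinction, truncated at a horizon.

    Geometric series: sum_{t=0}^{T-1} N * (1 - h)**t = N * (1 - (1-h)**T) / h.
    """
    return population_per_year * (1.0 - (1.0 - hazard) ** horizon_years) / hazard

# Baseline: 10 billion people per year, 1-in-a-million annual hazard.
baseline = expected_total_lives(10e9, 1e-6, 10**8)

# "Acceleration" scenario: 10% more people per year in exchange for a 5%
# higher hazard rate. In this toy parameterisation the trade is worth it,
# even though extinction becomes (slightly) more likely at every horizon.
accelerated = expected_total_lives(11e9, 1.05e-6, 10**8)
assert accelerated > baseline
```

The closed-form limit N / h makes the intuition quick to check: at a 1-in-a-million annual hazard, 10 billion people per year gives roughly 10^16 expected lives, despite extinction being ultimately certain in this toy model.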
Just a quick note that ‘double counting’ can be fine, since the counterfactual impact of different groups acting in concert doesn’t necessarily sum to 100%.
See more discussion here: https://forum.effectivealtruism.org/posts/fnBnEiwged7y5vQFf/triple-counting-impact-in-ea
Note that you can also undercount for similar reasons. For instance, if you have impact X, but another org would have done X otherwise, you might count your impact as zero. But that ignores that by doing X, you free up the other org to do something else high-impact.
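A minimal numeric sketch of why counterfactual impacts needn’t sum to 100% (the numbers are hypothetical): if a donor and a charity are each necessary for an outcome worth 100 units, each one’s counterfactual impact is the full 100.

```python
# Hypothetical illustration: counterfactual impact = value of the world
# with the actor minus value of the world without them. When two actors
# are each necessary for an outcome worth 100 units, each actor's
# counterfactual impact is the full 100, so naively summing gives 200.

def outcome(donor_acts, charity_acts):
    """100 units of value are produced only if both parties act."""
    return 100 if (donor_acts and charity_acts) else 0

donor_impact = outcome(True, True) - outcome(False, True)    # 100
charity_impact = outcome(True, True) - outcome(True, False)  # 100
naive_sum = donor_impact + charity_impact                    # 200, not 100
```

Both calculations are individually correct, which is why “double counting” here isn’t necessarily an error, even though it looks like one.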
I think I’d prefer to frame this issue as something more like “how you should assign credit as a donor in order to have the best incentives for the community isn’t the same as how you’d calculate the counterfactual impact of different groups in a cost-effectiveness estimate”.
Quick answer for 80k: paid traffic comes only from our free Google AdWords, which is a fixed budget each month. Over the last year, about 12% of the traffic was paid, roughly 10,000-20,000 users per month. This isn’t driving growth because the budget isn’t growing.
Thank you very much for doing this (long!) analysis.
Your conclusion makes sense to me and is an interesting result.
It would be interesting to think about how the survey can be adapted to better pick up these differences in future years.
This data might also be useful to cross-check against:
I just noticed that there were people who report first finding out about EA from 80k in 2009 and 2010. I’d say 80k was only informally formed in early 2011, and the name was only chosen at the end of 2011, so those survey responses must be mistaken. I gather that the sample sizes for the early years are small, so this is probably just one or two people.
That’s a complex topic, but our starting point for conversions would be the figures in the EA leaders survey: https://80000hours.org/2018/10/2018-talent-gaps-survey/
Just adding that we made a similar suggestion: that people should cut back their donations to ~1% until they’ve built up at least enough savings for 6-12 months of runway.
We also suggest here that people prioritise saving 15% of their income for retirement ahead of making substantial donations. If people want to donate beyond this level, that’s commendable, but I don’t think that’s where we should set a norm.
Great! I was wondering if this might be it.
I think in practice people work on it for both reasons depending on their values.
Thanks for this analysis. If there’s time for more, I’d be keen to see something more focused on ‘level of contribution’ rather than subscriber vs. identifier. I’m not too concerned about whether someone identifies with EA, but rather with how much impact they’re able to have. It would be useful to know which sources are most responsible for the people who are most contributing.
I’m not sure what proxies you have for this in the survey data, but I’m thinking ideally of concrete achievements, like working full-time in EA or donating over $5,000 per year.
You could also look at how dedicated to social impact they say they are combined with things like academic credentials, but these proxies are much more noisy.
One potential source of proxies is how involved someone says they are in EA, but again I don’t care about that so much compared to what they’re actually contributing.
Hi there, just a quick thought on the cause groupings in case you use them in future posts.
Currently, the post notes that global poverty is the cause most often selected as the top priority, but it should add that this is sensitive to how the causes are grouped, and there’s no single clear way to do this.
The most common division we have is probably these 4 categories: global poverty, GCRs, meta and animal welfare.
If we used this grouping, then the identifiers would report:
Global poverty: 27%
Animal welfare: 10%
(Plus climate change: 13%; mental health: 4%)
So, basically the top 3 areas are about the same. If climate change were grouped into GCRs, then GCRs would go up to 41% and be the clear leader.
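The grouping sensitivity can be sketched with the figures above. Note the standalone GCR share (28%) is inferred, not stated directly: the claim that adding climate change’s 13% takes GCRs to 41% implies GCRs alone sit at 41 − 13 = 28.

```python
# Top-priority shares from the figures above. The standalone GCR share
# (28%) is inferred: adding climate change's 13% reportedly takes GCRs
# to 41%, so GCRs alone must be 41 - 13 = 28.
top_priority = {
    "global poverty": 27,
    "GCRs": 28,
    "climate change": 13,
    "animal welfare": 10,
    "mental health": 4,
}

# Fine-grained split: the two leading causes are within a point of each other.
fine_margin = top_priority["GCRs"] - top_priority["global poverty"]   # 1

# Merge climate change into GCRs: GCRs becomes the clear leader at 41%.
merged = dict(top_priority)
merged["GCRs"] += merged.pop("climate change")
merged_margin = merged["GCRs"] - merged["global poverty"]             # 14
```

So the same survey responses yield either a near three-way tie or a clear leader, depending purely on how the categories are drawn.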
Global poverty is a huge area that receives hundreds of billions of dollars of investment, and could arguably be divided into health, economic empowerment (e.g. cash transfers), education, policy-change etc. That could also be an option for the next version of the survey.
I’m glad we have the finer grained divisions in the survey, but we have to be careful about how to present the results.
I think it might be clearer to break the Bay Area up into SF, the East Bay, the North Bay and the South Bay. These locations all take about an hour to travel between, which makes them comparable to London, Oxford and Cambridge (even Bristol). Treating such a large area as a single category makes it much more likely to rank top. Wikipedia reports that London is about 600 square miles, while the nine-county Bay Area is about 7,000. I appreciate that what counts as a city is not clear-cut, but I’d definitely say the Bay Area is more than one city. (Alternatively, we could group ‘Loxbridge’ as one category.)
I agree it’s better to give the most concrete suggestions possible.
As I noted right below this quote, we do often provide specific advice on ‘Plan B’ options within our career reviews and priority paths (i.e. nearby options to pivot into).
Beyond that, and for Plan Zs, I mentioned that they usually depend a great deal on the situation and are often covered by existing advice, which is why we haven’t gone into more detail before. I’m skeptical that what EAs most need is advice on how to get a job at a deli. I suspect the real problem might be more an issue of tone or implicit comparisons or something else. That said, I’m not denying that this part of the site could be greatly improved.
One point of factual disagreement is that I think good general career advice is in fact quite neglected.
I definitely agree with you that existing career advice usually seems quite bad. This was one of the factors that motivated us to start 80,000 Hours.
it seems like probably I and others disappointed with the lack of broader EA career advice should do the research and write some more concrete posts on the topic ourselves.
If we thought this was good, we would likely cross-post it or link to it. (Though we’ve found working with freelance researchers tough in the past, and haven’t accepted many submissions.)
I think my hope for better broad EA career advice may be better met by a new site/organization rather than by 80k.
Potentially, though I note some challenges with this and alternative ideas in the other comments.
Here are some additions and comments on some of your points.
If I remember correctly, the EA survey suggests that 80K is an important entry point into EA for lots of people.
It’s true that this means that stakes for improving 80,000 Hours are high, but it also seems like evidence that 80,000 Hours is succeeding as an introduction for many people.
3) We talk about EA movement-building not being funding constrained. If that’s the case, then presumably it’d be possible to create more roles, be that at 80K or at new organisations.
Unfortunately, a lack of funding constraints doesn’t necessarily mean it’s easy to build new teams. For instance, the community is heavily constrained by a shortage of managers, which makes it hard both to hire junior people and to set up new organisations. See more here.
Research/website like 80K’s current career profile reviews, but including less competitive career paths (perhaps this would need to focus on quantity over quality and “breadth” over depth)
Note that we have tried this in the past (e.g. allied health, web design, executive search), but those profiles took a long time to write, never got much attention, and as far as we’re aware haven’t caused any plan changes.
I think it would also be hard to correctly direct people to the right source of advice between the two orgs.
It seems better to try to make some quick improvements to 80,000 Hours, such as adding a list of very concrete but less competitive options to the next version of our guide. (And as noted, there are already options in earning to give and government.)
Research/website/podcasts etc like 80K’s current work, but focusing on specific cause areas (e.g. animal advocacy broadly, including both farmed animals and wild animals)
Agree—I mention this in another comment.
Regular career workshops
Yes, local effective altruism groups are already experimenting with these. However, note the risk that if such workshops become a major way people first engage with effective altruism, they could put off the people best suited to the narrow priority paths. As noted, this seems to have been a problem in our existing content, which is presumably narrower than these new workshops would be. They’re also quite challenging to run well: often someone able to do this independently could get a full-time job at an existing organisation.
One-on-one calls seem safer, and funding someone to work independently doing calls all day seems like a reasonable use of funding to me, provided they couldn’t or wouldn’t get a more senior job. (Though ‘EA Action’ tried this once before, and was shut down.)
Research/website/podcasts etc like 80K’s current work, but focused on high school age students, before they’ve made choices which significantly narrow down their options (like choosing their degree).
This seems pretty similar to SHIC: https://shicschools.org/
So it seems to me that either 80K should prioritise hiring more people to take up some of these opportunities, or EA as a movement should prioritise creating new organisations to take them up.
Unfortunately, we have very limited capacity to hire. It seems better that we focus our efforts on people who can help with our main organisational focus, which is the narrow vision. So, as I note above, I think these would mainly have to be done by other organisations.