I’ll add two more potential traps. There’s overlap with some of the existing ones but I think these are worth mentioning on their own.
9) Object-level work may contribute more learning value.
I think it’s plausible that the community will learn more if it focuses more on object-level work. There are several plausible mechanisms (not a comprehensive list): object-level work might have better feedback loops; it may build broader networks that can be used for learning about specific causes; and developing an expert inside view on an area may be the best way to improve your modelling of the world. (Compare liberal arts colleges’ claim that it’s worth having a major even if your educational goal is broad “critical thinking” skills.)
I’m eliding lots of open questions here about how to model the learning of a community. For example: is it more efficient for a community to learn by having its current members learn, or by recruiting new members with preexisting knowledge and skills?
I don’t have an answer to this question, but when I think about it I try to take the perspective of a hypothetical EA community ten years from now and ask whether it would prefer to be made up primarily of people with ten years’ experience working on meta causes, or of a biologist, a computer scientist, a lawyer, etc.
10) The most valuable types of capital may be “cause-specific”.
I suppose (9) is a subset of (10). But it may be important to invest today in capital that will pay off tomorrow (e.g., see 80k on career capital). And cause-specific opportunities may be better developed (and have higher returns) than meta ones. So, learning value aside, it may be valuable for EA to have lots of people who have invested in graduate degrees or professional networks. But these opportunities may sometimes require you to do object-level work.
My broader claim would be this: if we had a model where most of the activities that can usefully be augmented will come from people with (i) deep expertise in one of several fields, (ii) excellent epistemics, and (iii) low risk aversion, then the movement would de-prioritize grassroots meta and shift its emphasis, upweighting direct activities and subfield-specific meta.
(9) seems pretty compelling to me. To use some analogies from the business world: it wouldn’t make sense for a company to hire lots of people before it had figured out its business model, or to run a big marketing campaign while its product was still in development. Sometimes it feels to me like EA is doing those things. (But maybe that’s just because I am less satisfied with the current EA “business model”/“product” than most people.)
These traps are fairly compelling.
> “But maybe that’s just because I am less satisfied with the current EA “business model”/“product” than most people.”
Care to elaborate (or link to something)?
https://www.facebook.com/groups/effective.altruists/permalink/1263971716992516/