How to fix EA “community building”

Today, I mentioned to someone that I tend to disagree with others on some aspects of EA community building, and they asked me to elaborate further. Here’s what I sent them, very quickly written and only lightly edited:
Hard to summarize quickly, but here’s some loose gesturing in that direction:
We should stop thinking about “community building” and instead think about “talent development”. While building a community/culture is important and useful, the former wording sounds too much like we’re inward-focused rather than trying to get important things done in the world.
We should focus on the object level (what’s the probability of an extinction-level pandemic this century?) over social reality (what does Toby Ord think is the probability of an extinction-level pandemic this century?).
We should talk about AI alignment, but also broaden our horizons to include not-traditionally-core-EA causes, to sharpen our reasoning skills and resist insularity. Example topics I think should be more present in talent development programs are optimal taxation, cybersecurity, global migration and open borders, 1DaySooner, etc.
Useful test: Would your talent development program make sense if EA didn’t exist? (I.e., is it helping people grow and do useful things, or is it just funnelling people according to shallow metrics?)
Based on personal experience and observations of others’ development, the same person can have a much higher or much lower impact depending on the cultural environment they’re embedded in and the incentives they perceive themselves to face. Much of EA talent development should be about transmitting a particular culture that has produced impressive results in the past (and avoiding the cultural pitfalls responsible for some of the biggest fuckups of the last decade). Shaping culture is really important, hard to measure, and systematically neglected by talent metrics; avoiding this pitfall requires constantly reminding yourself of that.
Much of the culture is shaped by incentives (such as funding, karma, and event admissions). We should be really deliberate in how we set these incentives.
I find it very interesting to think about the difference between what a talent development project and a community-building project would look like.
To be clear, are you saying your preference for the phrase ‘talent development’ over ‘community building’ is based on your concern that people hear ‘community building’ and think, ‘Oh, these people are more interested in investing in their community as an end in itself than they are in improving the world’?
I don’t know about Jonas, but I like this more from the self-directed perspective of “I am less likely to confuse myself about my own goals if I call it talent development.”
Thanks! So, to check I understand you, do you think when we engage in what we’ve traditionally called ‘community building’ we should basically just be doing talent development?
In other words, your theory of change for EA is talent development + direct work = arrival at our ultimate vision of a radically better world?[1]
Personally, I think we need a far more comprehensive social change portfolio.
E.g., a waypoint described by MacAskill as something like the below:
“(i) ending all obvious grievous contemporary harms, like war, violence and unnecessary suffering; (ii) reducing existential risk down to a very low level; (iii) securing a deliberative process for humanity as a whole, so that we make sufficient moral progress before embarking on potentially-irreversible actions like space settlement.”
Yes, this.
All of this looks fantastic and like it should have been implemented 10 years ago. This is not something to sleep on.
The only nitpick I have is with how object level vs. social reality is described. Lots of people are nowhere near ready to make difficult calculations; e.g., the experience of the COVID reopening makes it hard to estimate that the probability of a pandemic lockdown in the next 5 years is 40%, even if that is the correct number. There are lots of situations where the division of labor is such that deferring to people at FHI etc. is the right place to start, since these predictions are really important and shouldn’t just be people giving their own two cents or beginning to learn the ropes of forecasting, which is what happens all too often. (Of course, that shouldn’t get in the way of new information and models travelling upwards, or of fun community building/talent development workshops where people try out forecasting to see if it’s a good fit for them.)
Yeah, I disagree with this on my inside view—I think “come up with your own guess of how bad and how likely future pandemics could be, with the input of others’ arguments” is a really useful exercise, and seems more useful to me than having a good probability estimate of how likely it is. I know that a lot of people find the latter more helpful though, and I can see some plausible arguments for it, so all things considered, I still think there’s some merit to that.