Mid-level EA communities, and cultivating the skill of thinking
I think a big problem for EA is not having a clear sense of what mid-level EAs are supposed to do. Once you’ve read all the introductory content, but before you’re ready to tackle anything really ambitious… what should you do, and what should your local EA community encourage people to do?
My sense is that grassroots EA groups default to “discuss the basics; recruit people to give money to GiveWell-esque charities and sometimes weirder things; occasionally run EAGx conferences; give people guidance to help them on their career trajectory.”
I have varying opinions on those things, but even if they were all good ideas… they leave an unsolved problem where there isn’t a very good “bread and butter” activity that you can do repeatedly, that continues to be interesting after you’ve learned the basics.
My current best guess (admittedly untested) is that Mid-Level EAs and Mid-Level EA Communities should focus on practicing thinking. And a corresponding bottleneck is something like “figuring out how to repeatedly have things that are worth thinking about, that are important enough to try hard on, but where it’s okay not to do a very good job because you’re still learning.”
I have some preliminary thoughts on how to go about this. Two hypotheses that seem interesting are:
LW/EA-Forum question-answering hackathons (where you pick a currently open question and try to solve it as best you can, whether via literature reviews or first-principles thinking).
Updating the Cause Prioritization wiki (either this one or this one, I’m not sure whether either of them has become the Schelling one), and meanwhile posting those updates as EA Forum blogposts.
I’m interested in chatting with local community organizers about it, and with established researchers that have ideas about how to make this the most productive version of itself.
Funny—I think a big problem for EA is mid-level EAs looking over their shoulders for someone else to tell them what they’re supposed to do.
So I actually draw an important distinction within “mid-level EAs”, where there are three stages:
“The beginning of the Middle” – once you’ve read all the basics of EA, the thing you should do is… read more things about EA. There’s a lot to read. Stand on the shoulders of giants.
“The Middle of the Middle” – ????
“The End of the Middle” – Figure out what to do, and start doing it (where “it” is probably some kind of ambitious project).
An important facet of the Middle of the Middle is that people don’t yet have the agency or context needed to figure out what’s actually worth doing, and a lot of the obvious choices are wrong.
(In particular, mid-level EAs have enough context to notice coordination failures, but not enough context to realize why the coordination failures are happening, nor the skills to do a good job at fixing them. A common failure mode is trying to solve coordination problems when their current skillset would probably produce a net-negative result.)
So yes, eventually, mid-level EAs should just figure out what to do and do it, but at EA’s current scale, there are hundreds (maybe thousands) of people who don’t yet have the right meta skills to do that.
Ah.
This seems to me like two different problems:
Some people lack, as you say, agency. This is what I was talking about—they’re looking for someone to manage them.
Other people are happy to do things on their own, but they don’t have the necessary skills and experience, so they will end up doing something that’s useless in the best case and actively harmful in the worst case. This is a problem which I missed before but now acknowledge.
Normally I would encourage practicing doing (or, ideally, you know, doing) rather than practicing thinking, but when doing carries the risk of harm, thinking starts to seem like a sensible option. Fair enough.
I think the actions EA actually needs to take also require figuring things out and building a deep model of the world.
Meanwhile… “sufficiently advanced thinking looks like doing”, or something. At the early stages, running a question hackathon requires just as much ops work and practice as running some other kind of hackathon.
I will note that the default mode where rationalists or EAs sit around talking without doing is a problem, but in my opinion that mode often doesn’t actually rise to the level of “thinking for real.” Thinking for real is real work.
Hmm, it’s not so much the classic rationalist trait of overthinking that I’m concerned about. It’s more like…
First, when you do X, the brain has a pesky tendency to learn exactly X. If you set out to practice thinking, the brain improves at the activity of “practicing thinking”. If you set out to achieve something that will require serious thinking, you improve at serious thinking in the process. Trying to try and all that. So yes, practicing thinking, but you can’t let your brain know that that’s what you’re trying to achieve.
Second, “thinking for real” sure is work, but the next question is: is this work worth doing? When you start with some tangible end goal and make plans by working your way backwards to where you are now, that tells you what thinking work needs to be done, decreasing the chance that you’ll waste time producing research which looks nice and impressive and all that, but in the end doesn’t help anyone improve the world.
I guess if you come up with technology that allows people to plug into the world-saving-machine at the level of “doing research-assistant-kind-of-work for other people who know what they’re doing” and gradually work their way up to “being one of the people who know what they’re doing”, that would make this work.
You wouldn’t be “practicing thinking”; you could easily convince your brain that you’re actually trying to achieve something in the real world, because you could clearly follow some chain of sub-sub-agendas to sub-agendas to agendas and see that what you’re working on is for real.
And, by the same token, you’d be working on something that (someone believes) needs to be done. And maybe sometimes you’d realize that, no, actually, this whole line of reasoning can be cut out or de-prioritized, here’s why, etc.—and that’s how you’d gradually grow to be one of the people who know what they’re doing.
So, yeah, proceed on that, I guess.
Some background thoughts on why I think the middle of the EA talent funnel should focus on thinking:
I currently believe the long-term value of EA is not in scaling up donations to well-vetted charities. This is because vetting charities is sort of anti-inductive. If things are going well (and I think this is quite achievable – it only really takes a couple of billionaires to care), charities should get vetted and then quickly afterwards get enough funding. This means the only leftover charities will be ones that are not well vetted.
So the long-term Earn-to-Give options are:
Actually becoming pretty good at vetting organizations and people
Joining donor lotteries (where you still might have to get good at thinking if you win)
Donating to GiveDirectly (which is maybe actually fine but less exciting)
The world isn’t okay because the problems it faces are actually hard. You need to understand how infrastructure plugs together. You need to understand incentives and unintended consequences. In some cases you need to actually solve unsolved philosophical problems. You need object level domain expertise in whatever field you’re trying to help with.
I think all of these require a general thinking skill that is hard to come by and really needs practice.
(Writing this is making me realize that maybe part of what I wanted with this thread was just an opportunity to sketch out ideas without having to fully justify every claim)
I’ll take your invitation to treat this as an open thread (I’m not going to EAG).
Why not tackle less ambitious goals?
What goals, though?
How about volunteering for an EA org?
Part of the problem is that there are not that many volunteer spots – even if this worked, it wouldn’t scale. There are communities and movements designed so that there’s lots of volunteer work to be done, where you can provide 1,000 volunteer jobs. But I don’t think EA is one of them.
I’ve heard a few people from orgs express frustration that people come to them wanting to volunteer, but this feels less like the orgs receive a benefit, and more like the org is creating a training program (at cost to itself) to provide a benefit to the volunteers.
I agree that EA does not have 1000 volunteer jobs. However, here is a list of some possibilities. I know ALLFED could still effectively utilize more volunteers.
My claim is just that “volunteer at an org” is not a scalable action that it makes sense to be a default thing EA groups do in their spare time. This isn’t to say volunteers aren’t valuable, or that many EAs shouldn’t explore that as an option, or that better coordination tools to improve the situation shouldn’t be built.
But I am a bit more pessimistic about it. The last time I checked, in most cases where someone said “huh, it looks like there should be all this free labor available from passionate people – can’t we connect them with orgs that need volunteers?” and tried to build some kind of tool to help with that, it turned out that most people aren’t actually very good at volunteering, and that getting anything done requires something more domain-specific and effortful.
My impression is that getting volunteers is about as hard as hiring a regular employee (much cheaper in money, but not in time and management attention), and that hiring employees is generally pretty hard.
(Again, not arguing that ALLFED shouldn’t look for volunteers or that EAs shouldn’t volunteer at ALLFED, esp. if my experience doesn’t match yours. I’d encourage anyone reading this who’s looking for projects to give ALLFED volunteering a look.)
The Middle of the Middle of the funnel is specifically people who I expect to not yet be very good at volunteering, in part because they’re either young and lacking some core “figure out how to be helpful and actually help” skills, or they’re older and busier with day jobs that take a lot of the same cognitive bandwidth that EA volunteering would require.
I think the *End* of the Middle of the funnel is more where “volunteer at EA orgs” makes sense. And people in the Middle of the Middle who think they have the “figure out how to be helpful and help” property should do so if they’re self-motivated to. (If they’re not self-motivated, they’re probably not a good volunteer.)