I suspect the goal here is less to deconfuse current EAs and more to make it easier to explain things to newcomers who don’t have any context.
(It also seems like good practice to me for people in leadership positions to keep people up to date about how they’re conceptualizing their thinking)
Quick note that if you set All Posts to “sort by new” instead of “sort by Daily” there’ll be 50 posts. (The Daily view is a bit weird because it varies a lot depending on forum traffic that week)
I don’t have much to contribute but I appreciated this writeup – I like it when EAs explore cause areas like this.
For the record I’m someone who works on the forum and thought the OP was expressed pretty reasonably.
Strong upvoted mostly to make it easier to find this comment.
The Middle of the Middle of the funnel is specifically people who I expect to not yet be very good at volunteering, in part because they’re either young and lacking some core “figure out how to be helpful and actually help” skills, or they’re older and busier with day jobs that take a lot of the same cognitive bandwidth that EA volunteering would require.
I think the *End* of the Middle of the funnel is more of where “volunteer at EA orgs” makes sense. And people in the Middle of the Middle who think they have the “figure out how to be helpful and help” property should do so if they’re self-motivated to. (If they’re not self-motivated, they’re probably not a good volunteer.)
My claim is just that “volunteer at an org” is not a scalable action that it makes sense to be a default thing EA groups do in their spare time. This isn’t to say volunteers aren’t valuable, or that many EAs shouldn’t explore that as an option, or that better coordination tools to improve the situation shouldn’t be built.
But I am a bit more pessimistic about it. The last time I checked, most of the attempts where someone said “huh, it looks like there should be all this free labor available from passionate people, can’t we connect these people with orgs that need volunteers?” and tried to build some kind of tool to help with that found that most people aren’t actually very good at volunteering, and that getting anything done requires something more domain-specific and effortful.
My impression is that getting volunteers is about as hard as hiring a regular employee (much cheaper in money, but not in time and management attention), and that hiring employees is generally pretty hard.
(Again, not arguing that ALLFED shouldn’t look for volunteers or that EAs shouldn’t volunteer at ALLFED, esp. if my experience doesn’t match yours. I’d encourage anyone reading this who’s looking for projects to give ALLFED volunteering a look.)
A membrane is a semi-permeable barrier: things can enter and leave, but it’s a bit hard to get in and a bit hard to get out. This allows whatever is inside to store negentropy, which lets it do more interesting things than its surroundings.
An EA group that anyone can join and leave at a whim is going to have relatively low standards. This is fine for recruiting new people. But right now I think the most urgent EA needs have more to do with getting people from the middle-of-the-funnel to the end, rather than the beginning-of-the-funnel to the middle. And I think helping the middle requires a higher expectation of effort and knowledge.
(I think a reasonably good mixed strategy is to have public events maybe once every month or two, and then additional events that require some kind of effort on the part of members)
What happens inside the membrane?
First, you meet some basic standards for intelligence, good communication, etc. The basics you need in order to accomplish anything on purpose.
As noted elsewhere, I think EA needs to cultivate the skill of thinking (as well as gaining agency). There are a few ways to go about this, but all of them require some amount of willingness to put in extra effort and work. Having a space where everyone has the expectation that everyone else there is interested in putting in that effort is helpful for motivation and persistence.
In time, you can develop conversation norms that foster better-than-average thinking and communication. (e.g. make sure that admitting you were wrong is rewarded rather than punished)
Membranes can work via two mechanisms:
Be more careful about who you let in, in the first place
Be willing to invest effort in giving feedback, or to expel people from the group.
The first option is easier. Giving feedback and expelling people is quite costly, and painful both for the person being expelled (who may have friends and roots in the group) and for the person doing the expelling (who may face a stressful fight with people second-guessing them).
If you’re much more careful about who you let in, an ounce of prevention can be more valuable than a pound of cure.
On the other hand, if you put up lots of barriers, you may find your community stagnating. There will also be cases where so-and-so seemed not super promising, but would have turned out fine if you’d given them a chance to grow.
Notes from a “mini talk” I gave to a couple people at EA Global.
Local EA groups (and orgs, for that matter) need leadership, and membranes.
Membranes let you control who is part of a community, so you can cultivate a particular culture within that community. They can involve barriers to entry, or actively removing people or behaviors that harm the culture.
Leadership is necessary to give that community structure. A good leader can make a community valuable enough that it’s worth people’s effort to overcome the barriers to entry, and worth maintaining those barriers.
Part of the problem is there are not that many volunteer spots – even if this worked, it wouldn’t scale. There are communities and movements designed so that there’s lots of volunteer work to be done, enough that you can provide 1000 volunteer jobs. But I don’t think EA is one of them.
I’ve heard a few people from orgs express frustration that when people come to them wanting to volunteer, it feels less like the org receives a benefit, and more like the org is creating a training program (at cost to itself) to provide a benefit to the volunteers.
Updated the thread to just serve as my shortform feed, since I got some value out of the ability to jot down early stage ideas.
I’m not yet sure that I’ll be doing this for more than 3 months, so I think it makes sense to focus on generating value in that time.
I think the actions that EA actually needs to be involved with doing also require figuring things out and building a deep model of the world.
Meanwhile… “sufficiently advanced thinking looks like doing”, or something. At the early stages, running a question hackathon requires just as much ops work and practice as running some other kind of hackathon.
I will note that the default mode, where rationalists or EAs sit around talking without doing, is a problem; but often that mode, in my opinion, doesn’t actually rise to the level of “thinking for real.” Thinking for real is real work.
So I actually draw an important distinction among “mid-level EAs”, where there are three stages:
“The beginning of the Middle” – once you’ve read all the basics of EA, the thing you should do is… read more things about EA. There’s a lot to read. Stand on the shoulders of giants.
“The Middle of the Middle” – ????
“The End of the Middle” – Figure out what to do, and start doing it (where “it” is probably some kind of ambitious project).
An important facet of the Middle of the Middle is that people don’t yet have the agency or context needed to figure out what’s actually worth doing, and a lot of the obvious choices are wrong.
(In particular, mid-level EAs have enough context to notice coordination failures, but not enough context to realize why the coordination failures are happening, nor the skills to do a good job of fixing them. A common failure mode is trying to solve coordination problems when their current skillset would probably produce a net-negative outcome)
So yes, eventually, mid-level EAs should just figure out what to do and do it, but at EA’s current scale, there are 100s (maybe 1000s) of people who don’t yet have the right meta skills to do that.
What goals, though?
I didn’t write a top level post but I sketched out some of the relevant background ideas here. (I’m not sure if they answer your particular concerns, but you can ask more specific questions there if you have them)
Integrity, Accountability and Group Rationality
I think there are particular reasons that EA should strive, not just to have exceptionally high integrity, but exceptionally high understanding of how integrity works.
Some background reading for my current thoughts includes habryka’s post on Integrity and my own comment here on competition.
A few reasons I think competition is good:
Diversity of worldviews. Two research orgs might develop different schools of thought that lead to different insights. This can produce more ideas, and helps avoid the tail risks of bias and groupthink.
Easier criticism. When there’s only one org doing A Thing, criticizing that org feels sort of like criticizing That Thing. And there may be a worry that if the org lost funding due to your criticism, That Thing wouldn’t get done at all. Multiple orgs can allow people to think more freely about the situation.
Competition forces people to shape up a bit. If you’re the only org in town doing a thing, there’s just less pressure to do a good job.
“Healthy” competition enables certain kinds of integrity (this is related to the previous two points). Say you think Cause X is really important, but there’s only one org working on it. If you think Org A isn’t being as high-integrity as you’d like, your options are limited: criticize them (publicly or privately), or start your own org, which is very hard. And if you think Org A is overall net positive, you might risk damaging Cause X by criticizing it. But if there are multiple orgs A and B working on Cause X, there are fewer downsides to criticizing one of them. (An alternate framing: maybe criticism wouldn’t actually damage Cause X, but it may still feel that way to a lot of people, so having a second Org B can be beneficial.) Multiple orgs working on a topic make it easier to reward good behavior.
In particular, if you notice that you’re running the only org in town and you want to improve your own integrity, you might want to cause there to be more competition. This way, you can help set up a system that creates better incentives for yourself, incentives that remain strong even if you gain power (which may be corrupting in various ways).
There are some special caveats here:
Some types of jobs benefit from concentration.
Communication platforms sort of want to be monopolies, so people don’t have to check a million different sites and Facebook groups.
Research orgs benefit from having a number of smart people bouncing ideas around.
See if you can refactor a goal into something that doesn’t actually require a monopoly.
If it’s particularly necessary for a given org to be a monopoly, it should be held to a higher standard – both in terms of operational competence and in terms of integrity.
If you want to challenge a monopoly with a new org, there’s likewise a particular burden to do a good job.
I think “doing a good job” requires a lot of things, but some important things (that should be red flags to at least think about more carefully if they’re lacking) include:
Have strong leadership with a clear vision.
Make sure you have a deep understanding of what you’re trying to do, and a clear model of how it’s going to help.
Don’t try to do a million things at once. I think a major issue facing some orgs is lack of focus.
Probably don’t have this be your first major project. Your first major project should be something it’s okay to fail at. Coordination projects are especially costly to fail at because they make the job harder for the next person.
Invest a lot in communication on your team.
Competition in the EA Sphere
A few years ago, EA was small, and it was hard to get funding to run even one organization. Spinning up a second one with the same focus area might have risked killing the first one.
By now, I think we have the capacity (financial, coordinational, and in human talent) for that to be less of a risk. Meanwhile, I think there are a number of benefits to having more, better, friendly competition.
I’m interested in chatting with people about the nuts and bolts of how to apply this.
Some background thoughts on why I think the middle of the EA talent funnel should focus on thinking:
I currently believe the longterm value of EA is not in scaling up donations to well vetted charities. This is because vetting charities is sort of anti-inductive. If things are going well (and I think this is quite achievable – it only really takes a couple billionaires to care) charities should get vetted and then quickly afterwards get enough funding. This means the only leftover charities will not be well vetted.
So the longterm Earn-to-Give options are:
Actually becoming pretty good at vetting organizations and people
Joining donor lotteries (where you still might have to get good at thinking if you win)
Donating to GiveDirectly (which is maybe actually fine but less exciting)
The world isn’t okay because the problems it faces are actually hard. You need to understand how infrastructure plugs together. You need to understand incentives and unintended consequences. In some cases you need to actually solve unsolved philosophical problems. You need object level domain expertise in whatever field you’re trying to help with.
I think all of these require a general thinking skill that is hard to come by and really needs practice.
(Writing this is making me realize that maybe part of what I wanted with this thread was just an opportunity to sketch out ideas without having to fully justify every claim)