Raemon’s Quick takes
Originally this was a thread for coordinating conversations at EA Global 2019. In the end, I was the only one who used it for top-level comments, and it turned out that a lot of the value was in getting to quickly hash out ideas that I hadn’t felt ready to turn into fully fledged posts.
I’ll probably continue using this for EA-related shortform posts, as a parallel to my LessWrong shortform feed.
Mid-level EA communities, and cultivating the skill of thinking
I think a big problem for EA is not having a clear sense of what mid-level EAs are supposed to do. Once you’ve read all the introductory content, but before you’re ready to tackle anything really ambitious… what should you do, and what should your local EA community encourage people to do?
My sense is that grassroots EA groups default to “discuss the basics; recruit people to give money to GiveWell-esque charities and sometimes weirder things; occasionally run EAGx conferences; give people guidance to help them on their career trajectory.”
I have varying opinions on those things, but even if they were all good ideas… they leave an unsolved problem: there isn’t a very good bread-and-butter activity that you can do repeatedly and that continues to be interesting after you’ve learned the basics.
My current best guess (admittedly untested) is that Mid-Level EAs and Mid-Level EA Communities should focus on practicing thinking. And a corresponding bottleneck is something like “figuring out how to repeatedly have things that are worth thinking about, that are important enough to try hard on, but where it’s okay to not do a very good job because you’re still learning.”
I have some preliminary thoughts on how to go about this. Two hypotheses that seem interesting are:
LW/EA-Forum Question Answering hackathons (where you pick a currently open question and try to solve it as best you can. This might be via literature reviews or first-principles thinking.)
Updating the Cause Prioritization wiki (either this one or this one; I’m not sure whether either of them has become the Schelling one), and meanwhile posting those updates as EA Forum blog posts.
I’m interested in chatting with local community organizers about this, and with established researchers who have ideas about how to make this the most productive version of itself.
Funny—I think a big problem for EA is mid-level EAs looking over their shoulders for someone else to tell them what they’re supposed to do.
So I actually draw an important distinction within “mid-level EAs”, where there are three stages:
“The beginning of the Middle” – once you’ve read all the basics of EA, the thing you should do is… read more things about EA. There’s a lot to read. Stand on the shoulders of giants.
“The Middle of the Middle” – ????
“The End of the Middle” – Figure out what to do, and start doing it (where “it” is probably some kind of ambitious project).
An important facet of the Middle of the Middle is that people don’t yet have the agency or context needed to figure out what’s actually worth doing, and a lot of the obvious choices are wrong.
(In particular, mid-level EAs have enough context to notice coordination failures, but not enough context to realize why the coordination failures are happening, nor the skills to do a good job of fixing them. A common failure mode is trying to solve coordination problems when their current skillset would probably produce a net-negative outcome.)
So yes, eventually, mid-level EAs should just figure out what to do and do it, but at EA’s current scale, there are hundreds (maybe thousands) of people who don’t yet have the right meta skills to do that.
Ah.
This seems to me like two different problems:
Some people lack, as you say, agency. This is what I was talking about—they’re looking for someone to manage them.
Other people are happy to do things on their own, but they don’t have the necessary skills and experience, so they will end up doing something that’s useless in the best case and actively harmful in the worst case. This is a problem which I missed before but now acknowledge.
Normally I would encourage practicing doing (or, ideally, you know, doing) rather than practicing thinking, but when doing carries the risk of harm, thinking starts to seem like a sensible option. Fair enough.
I think the actions that EA actually needs taken also require figuring things out and building a deep model of the world.
Meanwhile… “sufficiently advanced thinking looks like doing”, or something. At the early stages, running a question hackathon requires just as much ops work and practice as running some other kind of hackathon.
I will note that the default mode where rationalists or EAs sit around talking without doing is a problem, but often that mode, in my opinion, doesn’t actually rise to the level of “thinking for real.” Thinking for real is real work.
Hmm, it’s not so much the classic rationalist trait of overthinking that I’m concerned about. It’s more like…
First, when you do X, the brain has a pesky tendency to learn exactly X. If you set out to practice thinking, the brain improves at the activity of “practicing thinking”. If you set out to achieve something that will require serious thinking, you improve at serious thinking in the process. Trying to try and all that. So yes, practicing thinking, but you can’t let your brain know that that’s what you’re trying to achieve.
Second, “thinking for real” sure is work, but the next question is: is this work worth doing? When you start with some tangible end goal and make plans by working your way backwards to where you are now, that tells you what thinking work needs to be done, decreasing the chance that you’ll waste time producing research which looks nice and impressive and all that, but in the end doesn’t help anyone improve the world.
I guess if you come up with technology that allows people to plug into the world-saving-machine at the level of “doing research-assistant-kind-of-work for other people who know what they’re doing” and gradually work their way up to “being one of the people who know what they’re doing”, that would make this work.
You wouldn’t be “practicing thinking”; you could easily convince your brain that you’re actually trying to achieve something in the real world, because you could clearly follow some chain of sub-sub-agendas to sub-agendas to agendas and see that what you’re working on is for real.
And, by the same token, you’d be working on something that (someone believes) needs to be done. And maybe sometimes you’d realize that, no, actually, this whole line of reasoning can be cut out or de-prioritized, here’s why, etc.—and that’s how you’d gradually grow to be one of the people who know what they’re doing.
So, yeah, proceed on that, I guess.
Some background thoughts on why I think the middle of the EA talent funnel should focus on thinking:
I currently believe the longterm value of EA is not in scaling up donations to well-vetted charities. This is because vetting charities is sort of anti-inductive. If things are going well (and I think this is quite achievable – it only really takes a couple of billionaires to care), charities should get vetted and then quickly afterwards get enough funding. This means the only leftover charities will not be well vetted.
So the longterm Earn-to-Give options are:
Actually becoming pretty good at vetting organizations and people
Joining donor lotteries (where you still might have to get good at thinking if you win; see the sketch after this list)
Donating to GiveDirectly (which is maybe actually fine but less exciting)
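For anyone unfamiliar with the mechanism: in a donor lottery, each participant contributes to a shared pool, and one participant is randomly chosen, with probability proportional to their contribution, to direct the entire pool. Here’s a minimal toy sketch in Python; the names and amounts are invented for illustration, and real donor lotteries add operational structure (such as a guarantor) that this omits.

```python
import random

# Hypothetical contributions (illustrative only).
contributions = {"alice": 5_000, "bob": 20_000, "carol": 75_000}
pool = sum(contributions.values())  # $100,000 total

# One donor is chosen to direct the entire pool, with probability
# proportional to their contribution. Everyone's *expected* money moved
# is unchanged, but the winner can now justify spending serious time on
# vetting -- which is why winning means having to get good at thinking.
winner = random.choices(
    population=list(contributions),
    weights=list(contributions.values()),
)[0]

print(f"{winner} directs the full ${pool:,} pool")
# e.g. alice wins with probability 5,000 / 100,000 = 5%
```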
The reason the world isn’t okay is that the problems it faces are actually hard. You need to understand how infrastructure plugs together. You need to understand incentives and unintended consequences. In some cases you need to actually solve unsolved philosophical problems. You need object-level domain expertise in whatever field you’re trying to help with.
I think all of these require a general thinking skill that is hard to come by and really needs practice.
(Writing this is making me realize that maybe part of what I wanted with this thread was just an opportunity to sketch out ideas without having to fully justify every claim)
I’ll take your invitation to treat this as an open thread (I’m not going to EAG).
Why not tackle less ambitious goals?
What goals, though?
How about volunteering for an EA org?
Part of the problem is there are not that many volunteer spots – even if this worked, it wouldn’t scale. There are communities and movements designed so that there’s lots of volunteer work to be done – so that you can provide 1,000 volunteer jobs. But I don’t think EA is one of them.
I’ve heard a few people from orgs express frustration that people come to them wanting to volunteer, but this feels less like the org receiving a benefit and more like the org creating a training program (at cost to itself) to provide a benefit to the volunteers.
I agree that EA does not have 1000 volunteer jobs. However, here is a list of some possibilities. I know ALLFED could still effectively utilize more volunteers.
My claim is just that “volunteer at an org” is not a scalable action that makes sense as the default thing EA groups do in their spare time. This isn’t to say volunteers aren’t valuable, or that many EAs shouldn’t explore that as an option, or that better coordination tools to improve the situation shouldn’t be built.
But I am a bit more pessimistic about it – the last time I checked, most of the times someone said “huh, it looks like there should be all this free labor available by passionate people, can’t we connect these people with orgs that need volunteers?” and tried to build some kind of tool to help with that, it turned out that most people aren’t actually very good at volunteering, and that getting anything done requires something more domain-specific and effortful.
My impression is that getting volunteers is about as hard as hiring a regular employee (much cheaper in money, but not in time and management attention), and that hiring employees is generally pretty hard.
(Again, not arguing that ALLFED shouldn’t look for volunteers or that EAs shouldn’t volunteer at ALLFED, esp. if my experience doesn’t match yours. I’d encourage anyone reading this who’s looking for projects to give ALLFED volunteering a look.)
The Middle of the Middle of the funnel is specifically people who I expect to not yet be very good at volunteering, in part because they’re either young and lacking some core “figure out how to be helpful and actually help” skills, or they’re older and busier with day jobs that take a lot of the same cognitive bandwidth that EA volunteering would require.
I think the *End* of the Middle of the funnel is more of where “volunteer at EA orgs” makes sense. And people in the Middle of the Middle who think they have the “figure out how to be helpful and help” property should do so if they’re self-motivated to. (If they’re not self-motivated, they’re probably not a good volunteer.)
Competition in the EA Sphere
A few years ago, EA was small, and it was hard to get funding to run even one organization. Spinning up a second one with the same focus area might have risked killing the first one.
By now, I think we have enough capacity (in funding, coordination, and human talent) that that’s less of a risk. Meanwhile, I think there are a number of benefits to having more, better, friendly competition.
I’m interested in chatting with people about the nuts and bolts of how to apply this.
A few reasons I think competition is good:
Diversity of worldviews is valuable. Two research orgs might develop different schools of thought that lead to different insights. This can generate more ideas, as well as avoiding the tail risks of bias and groupthink.
Easier criticism. When there’s only one org doing A Thing, criticizing that org feels sort of like criticizing That Thing. And there may be a worry that if the org lost funding due to your criticism, That Thing wouldn’t get done at all. Multiple orgs can allow people to think more freely about the situation.
Competition forces people to shape up a bit. If you’re the only org in town doing a thing, there’s just less pressure to do a good job.
“Healthy” competition enables certain kinds of integrity. Sort of related to the previous two points. Say you think Cause X is really important, but there’s only one org working on it. If you think Org A isn’t being as high-integrity as you’d like, your options are limited: criticize them (publicly or privately), or start your own org, which is very hard. If you think Org A is overall net positive, you might risk damaging Cause X by criticizing it. But if there are multiple Orgs A and B working on Cause X, there are fewer downsides to criticizing one of them. (An alternate framing is that maybe criticism wouldn’t actually damage Cause X, but it may still feel that way to a lot of people, so getting a second Org B can be beneficial.) Multiple orgs working on a topic make it easier to reward good behavior.
In particular, if you notice that you’re running the only org in town and you want to improve your own integrity, you might want to cause there to be more competition. This way, you can help set up a system that creates better incentives for yourself, ones that remain strong even if you gain power (which may be corrupting in various ways).
There are some special caveats here:
Some types of jobs benefit from concentration.
Communication platforms sort of want to be monopolies so people don’t have to check a million different sites and facebook groups.
Research orgs benefit from having a number of smart people bouncing ideas around.
This means...
See if you can refactor a goal into something that doesn’t actually require a monopoly.
If it’s particularly necessary for a given org to be a monopoly, it should be held to a higher standard – both in terms of operational competence and in terms of integrity.
If you want to challenge a monopoly with a new org, there’s likewise a particular burden to do a good job.
I think “doing a good job” requires a lot of things, but some important things (that should be red flags to at least think about more carefully if they’re lacking) include:
Have strong leadership with a clear vision.
Make sure you have a deep understanding of what you’re trying to do, and a clear model of how it’s going to help.
Don’t try to do a million things at once. I think a major issue facing some orgs is lack of focus.
Probably don’t have this be your first major project. Your first major project should be something it’s okay to fail at. Coordination projects are especially costly to fail at because they make the job harder for the next person.
Invest a lot in communication on your team.
Integrity, Accountability and Group Rationality
I think there are particular reasons that EA should strive not just for exceptionally high integrity, but for an exceptionally high understanding of how integrity works.
Some background reading for my current thoughts includes habryka’s post on Integrity and my own comment here on competition.
What about Paul’s Integrity for Consequentialists?
Updated the thread to just serve as my shortform feed, since I got some value out of the ability to jot down early-stage ideas.
Grantmaking and Vetting
I think EA is vetting constrained. It’s likely that I’ll be involved with a new experimental grant allocation process. There are a few key ingredients here that are worth discussing:
Meta process design. I have some thoughts on designing good grantmaking processes (at the meta level), and I’m interested in hearing from others about what seem like important process elements.
Evaluation approach. I haven’t done (much) evaluation before, and would be interested in talking to people about what makes for good evaluation approaches.
Object-level ideas about organizations worth funding. New orgs, old orgs. (Note: I am specifically interested in things that feed into the x-risk ecosystem somehow. Also, in the near future I will only be able to consider organizations rather than individuals.)
Hey Raemon—I run the EA Grants program at CEA. I’d be happy to chat! Email me at nicole.ross@centreforeffectivealtruism.org if you want to arrange a time.
I won’t be at EAG but I’m in Berkeley for a week or so and would love to chat about this.
I’d offer that whatever you can do to make it possible to iterate on your grantmaking loop quickly will be useful. Perhaps start with smaller grants on a month or even week cycle, run a few rounds there, and then scale up. Don’t try to make it near-perfect from the start; instead, try to make it something that can become near-perfect through iteration and improvement.
I’m not yet sure that I’ll be doing this for more than 3 months, so I think it makes more sense to focus on generating value in that time.
Gotcha. I wonder whether you could create substantially more impact by running it over the long term yourself, or by setting it up well for someone else to run long term. Obviously I have no context on the project or your goals, but I’ve seen cases where people do a short-term project aiming for impact and, in the end, feel that they could’ve created much more impact by doing the thing in a more ongoing manner. So this note may or may not be relevant depending on the project and your goals :)
Notes from a “mini talk” I gave to a couple people at EA Global.
Local EA groups (and orgs, for that matter) need leadership, and membranes.
Membranes let you control who is part of a community, so you can cultivate a particular culture within it. They can involve barriers to entry, or actively removing people or behaviors that harm the culture.
Leadership is necessary to give that community structure. A good leader can make a community valuable enough that it’s worth people’s effort to overcome the barriers to entry, and/or to maintain those barriers.
Membranes
A membrane is a semi-permeable barrier: things can enter and leave, but it’s a bit hard to get in and a bit hard to get out. This allows membranes to store negentropy, which lets them do more interesting things than their surroundings.
An EA group that anyone can join and leave at a whim is going to have relatively low standards. This is fine for recruiting new people. But right now I think the most urgent EA needs have more to do with getting people from the middle of the funnel to the end, rather than from the beginning of the funnel to the middle. And I think helping the middle requires a higher expectation of effort and knowledge.
(I think a reasonably good mixed strategy is to have public events maybe once every month or two, and then additional events that require some kind of effort on the part of members)
What happens inside the membrane?
First, you meet some basic standards for intelligence, good communication, etc. The basics you need in order to accomplish anything on purpose.
As noted elsewhere, I think EA needs to cultivate the skill of thinking (as well as gaining agency). There are a few ways to go about this, but all of them require some amount of “willingness to put in extra effort and work.” Having a space where people expect that everyone there is interested in putting in that effort is helpful for motivation and persistence.
In time, you can develop conversation norms that foster better-than-average thinking and communication (e.g. making sure that admitting you were wrong is rewarded rather than punished).
Membranes can work via two mechanisms:
Be more careful about who you let in, in the first place
Be willing to invest effort in giving feedback, or to expel people from the group.
The first option is easier. Giving feedback and expelling people are quite costly, and painful both for the person being expelled (who may have friends and roots in the group) and for the person doing the expelling (which may involve a stressful fight with people second-guessing you).
If you’re much more careful about who you let in, an ounce of prevention can be more valuable than a pound of cure.
On the other hand, if you put up lots of barriers, you may find your community stagnating. There may also be false positives: people who “seemed not super promising” but who would have turned out fine if you’d given them a chance to grow.