I think the actions EA actually needs to take also require figuring things out and building a deep model of the world.
Meanwhile… “sufficiently advanced thinking looks like doing”, or something. At the early stages, running a question hackathon requires just as much ops work and practice as running some other kind of hackathon.
I will note that the default mode where rationalists or EAs sit around talking and not doing is a problem, but often that mode, in my opinion, doesn’t actually rise to the level of “thinking for real.” Thinking for real is real work.
So I actually draw an important distinction within “mid-level EAs”, where there are three stages:
“The beginning of the Middle” – once you’ve read all the basics of EA, the thing you should do is… read more things about EA. There’s a lot to read. Stand on the shoulders of giants.
“The Middle of the Middle” – ????
“The End of the Middle” – Figure out what to do, and start doing it (where “it” is probably some kind of ambitious project).
An important facet of the Middle of the Middle is that people don’t yet have the agency or context needed to figure out what’s actually worth doing, and a lot of the obvious choices are wrong.
(In particular, mid-level EAs have enough context to notice coordination failures, but not enough context to realize why those failures are happening, nor the skills to do a good job of fixing them. A common failure mode is trying to solve coordination problems when your current skillset would probably produce a net-negative outcome.)
So yes, eventually mid-level EAs should just figure out what to do and do it, but at EA’s current scale, there are hundreds (maybe thousands) of people who don’t yet have the right meta-skills to do that.
What goals, though?
I didn’t write a top level post but I sketched out some of the relevant background ideas here. (I’m not sure if they answer your particular concerns, but you can ask more specific questions there if you have them)
Integrity, Accountability and Group Rationality
I think there are particular reasons that EA should strive not just to have exceptionally high integrity, but also an exceptionally high understanding of how integrity works.
Some background reading for my current thoughts includes habryka’s post on Integrity and my own comment here on competition.
A few reasons I think competition is good:
Diversity of worldviews. Two research orgs might develop different schools of thought that lead to different insights. This can generate more ideas while avoiding the tail risks of bias and groupthink.
Easier criticism. When there’s only one org doing A Thing, criticizing that org feels sort of like criticizing That Thing. And there may be a worry that if the org lost funding due to your criticism, That Thing wouldn’t get done at all. Multiple orgs can allow people to think more freely about the situation.
Competition forces people to shape up a bit. If you’re the only org in town doing a thing, there’s just less pressure to do a good job.
“Healthy” Competition enables certain kinds of integrity. This is sort of related to the previous two points. Say you think Cause X is really important, but there’s only one org working on it. If you think Org X isn’t being as high-integrity as you’d like, you don’t really have an option (other than abandoning Cause X, or doing the extremely effortful thing of starting your own org). Multiple orgs working on a topic make it easier to reward good behavior.
In particular, if you notice that you’re running the only org in town, and you want to improve your own integrity, you might want to cause there to be more competition. This way, you can help set up a system that creates better incentives for yourself, incentives that remain strong even if you gain power (which may be corrupting in various ways).
There are some special caveats here:
Some types of jobs benefit from concentration.
Communication platforms sort of want to be monopolies so people don’t have to check a million different sites and facebook groups.
Research orgs benefit from having a number of smart people bouncing ideas around.
See if you can refactor a goal into something that doesn’t actually require a monopoly.
If it’s particularly necessary for a given org to be a monopoly, it should be held to a higher standard – both in terms of operational competence and in terms of integrity.
If you want to challenge a monopoly with a new org, there’s likewise a particular burden to do a good job.
I think “doing a good job” requires a lot of things, but some important things (that should be red flags to at least think about more carefully if they’re lacking) include:
Having strong leadership with a clear vision
Having a deep understanding of what you’re trying to do, and a clear model of how it’s going to help
Not trying to do a million things at once. I think a major issue facing some orgs is lack of focus.
Not making this your first major project. Your first major project should be something it’s okay to fail at. Coordination projects are especially costly to fail at, because they make the job harder for the next person.
Investing a lot in communication on your team.
Competition in the EA Sphere
A few years ago, EA was small, and it was hard to get funding to run even one organization. Spinning up a second one with the same focus area might have risked killing the first one.
By now, I think we have enough capacity (financial, coordinational, and human-talent) that this is less of a risk. Meanwhile, I think there are a number of benefits to having more, friendlier competition.
I’m interested in chatting with people about the nuts and bolts of how to apply this.
Some background thoughts on why I think the middle of the EA talent funnel should focus on thinking:
I currently believe the longterm value of EA is not in scaling up donations to well-vetted charities, because vetting charities is sort of anti-inductive. If things are going well (and I think this is quite achievable – it only really takes a couple of billionaires to care), charities should get vetted and then quickly afterwards receive enough funding. This means the only leftover charities will be those that are not well vetted.
So the longterm Earn-to-Give options are:
Actually becoming pretty good at vetting organizations and people
Joining donor lotteries (where you still might have to get good at thinking if you win – see the sketch just after this list)
Donating to GiveDirectly (which is maybe actually fine but less exciting)
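For readers unfamiliar with the mechanics, here’s a minimal sketch of how a donor lottery’s selection step works (the donor names and amounts below are made up for illustration): each participant’s chance of winning the right to allocate the entire pot is proportional to their contribution, so the expected amount of money each donor directs is unchanged, but the winner can justify investing serious thought into the allocation.

```python
import random

# Minimal donor-lottery sketch (illustrative names and amounts).
# Each entrant's chance of allocating the full pot is proportional
# to their contribution, so the expected money a donor directs is
# the same as if they had donated directly.
entries = {"alice": 500, "bob": 2000, "carol": 7500}

pot = sum(entries.values())
donors = list(entries)
winner = random.choices(donors, weights=[entries[d] for d in donors], k=1)[0]

print(f"{winner} allocates the ${pot:,} pot "
      f"(win probability was {entries[winner] / pot:.0%})")
```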
The world isn’t okay because the problems it faces are actually hard. You need to understand how infrastructure plugs together. You need to understand incentives and unintended consequences. In some cases you need to actually solve unsolved philosophical problems. You need object level domain expertise in whatever field you’re trying to help with.
I think all of these require a general thinking skill that is hard to come by and really needs practice.
(Writing this is making me realize that maybe part of what I wanted with this thread was just an opportunity to sketch out ideas without having to fully justify every claim)
Mid-level EA communities, and cultivating the skill of thinking
I think a big problem for EA is not having a clear sense of what mid-level EAs are supposed to do. Once you’ve read all the introductory content, but before you’re ready to tackle anything real ambitious… what should you do, and what should your local EA community encourage people to do?
My sense is that grassroots EA groups default to “discuss the basics; recruit people to give money to GiveWell-esque charities and sometimes weirder things; occasionally run EAGx conferences; give people guidance to help them on their career trajectory.”
I have varying opinions on those things, but even if they were all good ideas… they leave an unsolved problem where there isn’t a very good “bread and butter” activity that you can do repeatedly, that continues to be interesting after you’ve learned the basics.
My current best guess (admittedly untested) is that Mid-Level EAs and Mid-Level EA Communities should focus on practicing thinking. And a corresponding bottleneck is something like “figuring out how to repeatedly have things that are worth thinking about, that are important enough to try hard on, but where it’s okay not to do a very good job because you’re still learning.”
I have some preliminary thoughts on how to go about this. Two hypotheses that seem interesting are:
LW/EA-Forum Question Answering hackathons (where you pick a currently open question and try to solve it as best you can – this might be via literature reviews, or via first-principles thinking)
Updating the Cause Prioritization wiki (either this one or this one – I’m not sure if either of them has become the Schelling one), and meanwhile posting those updates as EA Forum blogposts.
I’m interested in chatting with local community organizers about this, and with established researchers who have ideas about how to make it the most productive version of itself.
Grantmaking and Vetting
I think EA is vetting constrained. It’s likely that I’ll be involved with a new experimental grant allocation process. There are a few key ingredients here that are worth discussing:
Meta Process design. I have some thoughts on designing good grantmaking processes (at the meta level), and I’m interested in hearing from others about what seem like important process elements.
Evaluation approach. I haven’t done (much) evaluation before, and would be interested in talking to people about what makes for good evaluation approaches.
Object-level ideas about organizations worth funding – new orgs and old. (Note: I am specifically interested in things that feed into the x-risk ecosystem somehow. Also, in the near future I will only be able to consider organizations rather than individuals.)
I think if you’ve read Ben’s writings, it’s obvious that the prime driver is about epistemic health.
I’m also worried about the overall epistemic health of EA – if it’s reliably misleading people, it’s much less useful as a source of information.
I’m fairly confident, based on reading other stuff Ben Hoffman has written, that this post has much less to do with Ben wanting to justify a rejection of EA-style giving, and much more to do with Ben being frustrated by what he sees as bad arguments/reasoning/deception in the EA sphere.
I have more thoughts but it’s sufficiently off topic for this post that I’ll probably start a new thread about it.
Meta note: I feel a vague sense of doom about a lot of questions on the EA Forum (contrasted with LessWrong): they end up focused on “how should EA overall coordinate?”, “what should be the top causes?”, and “what should be part of the EA narrative?”
I worry about this because I think it’s harder to think clearly about narratives and coordination mechanisms than it is about object-level facts. I also have a sense that the questions are often framed in a way that is trying to tell me the answer rather than help me figure things out.
And often I think the questions could be reframed as empirical questions without the “should” and “we” frames, which (a) I think would be easier to reason about, and (b) would remain approximately as useful for helping people coordinate.
“Is X a top cause area?” is a sort of weird question. The whole point of EA is that you need to prioritize, and there are only ever going to be a smallish number of “top causes.” So the answer to any given “Is X a top cause?” is going to be “probably not.”
But, it’s still useful to curiously explore cause areas that are underexplored. “What are the tractable interventions of [this particular cause]?” is a question that you can explore without making it about whether it’s one of the top causes overall.
FYI, Critch in particular is pretty time-constrained. I’m not sure who the best person to reach out to currently is – someone who has both the knowledge and the time to do a good job of helping. (I’ll ask around; meanwhile, the “apply to MIRI” suggestion is what I got.)
Buck Shlegeris writes (on FB):
I think that every EA who is a software engineer should apply to work at MIRI, if you can imagine wanting to work at MIRI.
It’s probably better for you to not worry about whether you’re wasting our time. The first step in our interview is the Triplebyte quiz, which I think is pretty good at figuring out who I should spend more time talking to. And I think EAs are good programmers at high enough rates that it seems worth it to me to encourage you to apply.
There is great honor in trying and failing to get a direct work job. I feel fondness in my heart towards all the random people who email me asking for my advice on becoming an AI safety researcher, even though I’m not fast at replying to their emails and most are unlikely to be able to contribute much to AI safety research.
You should tell this to all your software engineer friends too.
EDIT: Sorry, I should have clarified that I meant that you should do this if you’re not already doing something else that’s in your opinion comparably valuable. I wrote this in response to a lot of people not applying to MIRI out of respect for our time or something; I think there are good places to work that aren’t MIRI, obviously.
That is interesting to hear. Some aspects of the overviews are of course going to be more familiar to domain experts.
Just wanted to make a quick note that I also felt the “overview”-style posts weren’t very useful to me (since they mostly encapsulate things I had already thought about).
At some point I was researching some aspects of nuclear war and reading a relevant GCRI paper, and what I found myself really wishing was that the paper had drilled deep into whatever object-level, empirical data was available, rather than being a high-level summary.
I basically agree with this. I have a bunch of thoughts about healthy competition in the EA sphere I’ve been struggling to write up.