What to do with people?

I would like to offer one possible answer to an ongoing discussion in the effective altruism community, centered on the question of a scalable use of people (“Task Y”).

The following part of the 80,000 Hours podcast with Nick Beckstead is a succinct introduction to the problem (as emphasized by alxjrl):

Nick Beckstead: (… ) I guess, the way I see it right now is this community doesn’t have currently a scalable use of a lot of people. There’s some groups that have found efficient scalable uses of a lot of people, and they’re using them in different ways.
For example, if you look at something like Teach for America, they identified an area where, “Man, we could really use tons and tons of talented people. We’ll train them up in a specific problem, improving the US education system. Then, we’ll get tons of them to do that. Various of them will keep working on that. Some of them will understand the problems the US education system faces, and fix some of its policy aspects.” That’s very much a scalable use of people. It’s a very clear instruction, and a way that there’s an obvious role for everyone.
I think, the Effective Altruist Community doesn’t have a scalable use of a lot of its highest value … There’s not really a scalable way to accomplish a lot of these highest valued objectives that’s standardised like that. The closest thing we have to that right now is you can earn to give and you can donate to any of the causes that are most favored by the Effective Altruist Community. I would feel like the mass movement version of it would be more compelling if we’d have in mind a really efficient and valuable scalable use of people, which I think is something we’ve figured out less.
I guess what I would say is right now, I think we should figure out how to productively use all of the people who are interested in doing as much good as they can, and focus on filling a lot of higher value roles that we can think of that aren’t always so standardised or something. We don’t need 2000 people to be working on AI strategy, or should be working on technical AI safety exactly. I would focus more on figuring out how we can best use the people that we have right now.

Relevant posts and discussions on the topic can be found under several posts on the forum.

Hierarchical networked structure

The answer I’d like to offer is abstract, but general and scalable: “build hierarchical networked structures”, for lack of a better name. It is best understood as a mild shift of attitude, a concept on a similar level of generality as “prioritization” or “crucial considerations”.

The hierarchical structure can be in physical space, functional space, or research space.

An example of a hierarchy in physical space is the structure of local effective altruism groups: it is hard to coordinate an unstructured group of ten thousand people. It is less hard, but still difficult, to coordinate a structure of 200 “local groups” with widely different sizes, cultures, and memberships. The optimal solution is likely to coordinate something like 5-25 “regional” coordinators / hub leaders, who then coordinate with the local groups. The underlying theoretical reasons for such a structure are simple considerations like “network distance” or “bandwidth constraints”.
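To make the “bandwidth constraints” intuition concrete, here is a small back-of-the-envelope sketch (the function names and exact numbers are only illustrative, loosely based on the example above): in a flat structure, every pair of groups may need a direct coordination channel, while a two-level hierarchy only needs center-to-hub and hub-to-group links.

```python
def flat_channels(n: int) -> int:
    """Pairwise coordination channels if all n groups talk to each other directly."""
    return n * (n - 1) // 2

def hierarchical_channels(n_groups: int, n_hubs: int) -> int:
    """Channels in a two-level hierarchy: a center talks to n_hubs regional
    hubs, and each local group talks to exactly one hub."""
    return n_hubs + n_groups

# 200 local groups, ~10 regional hubs (illustrative numbers):
print(flat_channels(200))              # 19900
print(hierarchical_channels(200, 10))  # 210
```

Just as important as the total count is the per-node load: in the two-level hierarchy, no single coordinator maintains more than roughly 20-25 links, which is what a “bandwidth constraint” on any individual actually demands.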

A hierarchy in functional space could be, for example, a hierarchy of organizations and projects providing career advice. It is difficult for a small and lean organization to give personalized career advice to tens of thousands of people. A scalable hierarchical version of career advice might look like this: based on a general request, a student considering future study plans is redirected to e.g. Effective Thesis, which specializes in that problem. From there, the student is connected with a specialist coach with object-level knowledge. My guess is that such a hierarchical structure could scale roughly 100x further than a single organization focused on picking out just the few most impactful people.

A hierarchy in research space could be a structure of groups working on various sub-problems and sub-questions. For example, part of the answer to the question “how to influence the long-term future” depends on the extent to which the world is chaotic, random, or predictable. It would be great to have a group of people working on this. There are thousands of relevant questions and tens of thousands of sub-questions which should be studied from an effective altruist perspective.

In general, hierarchical networked structures are the way complex functional systems are organized and how they scale. A closely related concept is “modular decomposition”.

Why networked? I want to point to the network properties of these structures. It is possible to think about some crucial properties of complex systems using concepts from network science: e.g. the average and maximal distance between nodes, the “bandwidth” of links, mechanisms for new link creation, and similar.
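As a rough illustration of the “distance between nodes” property: assuming a balanced hierarchy in which every coordinator handles about b people or groups, any two of n members are connected through at most 2·⌈log_b(n)⌉ hops (up to a common coordinator and back down). The function name and parameters below are my own, for illustration only.

```python
import math

def max_network_distance(n: int, b: int) -> int:
    """Worst-case number of hops between two leaves of a balanced
    hierarchy with n members and branching factor b."""
    depth = math.ceil(math.log(n, b))  # levels above the leaves
    return 2 * depth                   # up to the common ancestor, then down

# Ten thousand people, each coordinator handling ~15 others:
print(max_network_distance(10_000, 15))  # 8
```

This logarithmic scaling is why a hierarchy of ten thousand people can still pass information between any two members in a handful of steps, while keeping each individual’s link count bounded.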

Why structures? To put structural aspects in focus. The word hierarchy has many other meanings and connotations, like status hierarchy or a top-down, command-and-control style of management, which I do not want to recommend.

How is this different

It may be helpful to contrast creating hierarchical structure with other organizational principles.

Effective altruism has at its heart the principle of prioritization: where pure hierarchization tells you to decompose the whole into subparts and assign someone to deal with each of the parts, pure prioritization tells you to select just the best action and assign just the best person to do it. Taken to the extreme, prioritization leads to recipes like “find the brightest prodigy and make him or her work on the most important problem in AI safety”. Taken to the extreme, hierarchization leads people to work on obscure questions.

Do not get me wrong: prioritization is a great principle, but I would suggest effective altruism should use hierarchization more than it does.

Another competing (self-)organizational principle is homophily, that is, people’s tendency to form ties with people similar to themselves. Where hierarchization leads to different levels of specialization, homophily leads to homogeneous clusters of people. Starting with several Oxford utilitarian philosophers, you attract more Oxford utilitarian philosophers (the so-called founder effect). Good ML researchers are more likely to know other good ML researchers. People critical of EA’s organizational landscape will more likely talk to other people dissatisfied with the same problems.

Homophily is in general neither good nor bad. In some ways, it provides immense benefits to the movement (like: we want smart altruistic people), but from a structural perspective it also has significant drawbacks.

Taken together, prioritization and homophily lead to problems. For example, suppose there is a pool of several hundred EAs who are in some ways quite similar: elite university education, good analytic thinkers, concerned about the long-term future, looking mainly for high-impact jobs, without much practical experience in project management, technical disciplines, grant-making, and many other more specialized skills. All of them prioritize their career options, and all of them apply for the research analyst role at OpenPhil. At the same time, despite this pool of talent, organizations have trouble finding people who would fit specific roles, and there is always much more work than people.

I hope you have the general direction now. If not, this related background may help:

https://en.wikipedia.org/wiki/Hierarchy#Examples_of_other_applications

https://en.wikipedia.org/wiki/Hierarchical_network_model

https://en.wikipedia.org/wiki/Efficiency_(network_science)

In practice

While it may be more difficult to turn an answer of the form “go and build hierarchical networked structures” into action than, say, “go and teach”, I’m optimistic that the current effective altruism community is competent enough to use such high-level principles. Moreover, it is not necessary for everyone to work on “structure building”; many people would simply “fit into the structure”.

I would expect that a lot would be achievable just by a change of attitude in this direction, both among talented EAs and among movement leaders.

By a rough estimate, for some EA jobs, literally years of work are spent in aggregate by talented people just competing for the positions. I’m confident that similar effort directed toward figuring out which hierarchical structures we need would lead to at least some good plans, and that thinking about where one can fit in the structure could lead more people to do useful work.

Note: this requires actual, real, intellectual work. There aren’t any ready-made recipes, lists of what structures to create, network maps, or similar resources.

What we already have and what we should do

To some extent, hierarchies emerge naturally. Of the examples described above, the local effective altruism group structure would likely develop toward a 2-layered hierarchy even without much planning. In the research domain, we can see the gradual development of more specialized sub-groups, such as the Center for the Governance of AI within FHI.

What I’m trying to say is that hierarchical structures may be grown more deliberately, and that they can productively use people.

How is this decision-relevant

If the above still sounds very theoretical, I’ll try to illustrate the possible shift of attitude with several examples.

Let’s say you are one of the hundreds of EAs applying for jobs: good university education, good analytical skills, a focus on the long-term future, looking mainly for high-impact jobs. Looking at your situation mainly with the “prioritization” attitude, you can easily arrive at the conclusion that some of your best career options are, for example, a research analyst job at OpenPhil, research-management roles at FHI, CHAI, or BERI, or various positions at CEA. Perhaps less attractive are jobs at, for example, GiveWell.

What happens if you put on your “build hierarchical networked structures” hat? You pick, for example, “effective altruism movement building” as an area/task (it is likely somewhere near the top of the prioritization). In the next step, you attempt a hierarchical “decomposition” of the area. You can get started just by looking at the past and present internal structures of CEA, with sub-groups or sub-tasks like Events, Grants, or Groups. Each of these “parts” usually needs all of: theoretical work, research and development, and execution and ops.

After a bit of looking around, you may find, for example, that there are just a few people systematically trying to create amazing events. There are opportunities to practice: CFAR is often open to ops volunteers, EAG as well; you may run an event for your group, or create some new event which would be useful for the broader community. All of this is impactful work, if not an impactful job.

Or, you may find out there isn’t anyone around working exactly on research into EA events. By that, I mean questions like: “How do events lead to impact? How can we measure it? Are there characteristic patterns in how people meet each other? What are the relevant non-EA reference classes for various EA events?” When you try to work on this, you may find that it depends on specific skills, or requires contact with people working on events, so it may be less tractable, but it’s still worth trying. I would also expect good work on this topic to have impact, attract attention, and possibly funding.

While I picked examples from the “EA movement building” cause area, which can ultimately lead to working in effective altruism professionally, that’s not the point. In other cause areas, the “build hierarchical networked structures” attitude can lead to work that doesn’t have the EA label in its name at all, yet is still quite impactful. We need EA experts and professionals in many fields. Also, often the most impactful action may not be doing something directly, but creating a structure or optimizing some network. A short example: x-risk seems to be a neglected consideration in most of the economics literature. One good option could be to pursue an academic career and work on the topic. Possibly an even better option is to link researchers in academia who are already thinking about these topics at different institutions, e.g. by organizing a seminar.

What can the shift look like for someone in a central position? One change could be described as matching the “2nd best” and “3rd best” options with people. Delegating. Supporting the growth of more specialized efforts.

What good practice may look like: the Center for the Governance of AI has an extensive research agenda. Obviously the core researchers at the institution should focus on top-priority problems, but as even some of the sub-problems are still quite important, it may make sense to encourage others to work on them. How might this happen in practice? For example, via the research affiliates program, or by having AI Safety Camp participants work on the topics.

Another example: let’s say you are 80,000 Hours, an effective altruist organization trying to help people have an impact with their careers. You prioritize, focusing mainly on moving ML PhDs into AI safety and impressive policy people into the governance of AI. At the same time, you are running the currently largest EA mass-outreach project. The unfortunate result is that almost all of the people interested in having impactful careers have to rely just on the website, and only a tiny fraction get some personal support.

What might a hierarchical networked structure approach look like? For example, distilling the coaching knowledge and creating a guide for professional EA group organizers, enabling them to provide coaching to a less exclusive group of effective altruists. There are now dozens of professional EA community builders, and EA career coaching is part of their daily jobs; yet insofar as there is more knowledge than what is on the website, they are mostly left to rediscover it.

How can the shift look for someone working in the funding part of the ecosystem? One obvious way is to encourage re-granting. This is happening to some extent: it likely does not make sense for OpenPhil to evaluate $10,000 grant applications, so such projects are a better fit for EA Grants. Yet there are potentially impactful things which are so small that it does not make sense to evaluate them even as EA Grants; these could be supported e.g. by community builders in larger EA groups.

Another opportunity for hierarchical networked structures is in project evaluations and talent scouting. Instead of relying mainly on the informal personal networks of grant evaluators, there could be more formal structures of trusted experts.

Possible problems

It is possible that some important tasks are not decomposable in a way which would be good for delegating them to hierarchical structures.

  • While the decomposability of tasks is a question of active theoretical research, it seems clear that many important practical problems are decomposable.

Hierarchical structures composed of a large number of people have significant inertia, and when they gain momentum, it may be hard to steer them. (Think about bureaucracies.)

  • I agree this is true, but in my view it would be good for some parts of the effective altruism movement to have more of this property. It seems to me that in the current state, too many EAs are too “fluid”, willing to change plans often based on the latest prioritization results or 80,000 Hours posts (e.g. someone switching from a research career to earning to give, then back to studying x-risks, then considering ops roles, etc.).

  • Also, I would consider it a good result if the “trail” behind the core of the effective altruism movement were dotted with structures and organizations working on highly impactful problems, even if those problems are no longer in exactly first place in the current prioritization.

It is difficult to create such structures and very few people have the relevant skills.

  • I’m generally sceptical of such arguments. The effective altruism movement has managed to gather an impressively competent group of people, and many of the “new” EAs do not seem less competent than the “old” EAs who built the existing structures. For example, I would expect the current community to contain a number of people about as competent as Robert Wiblin or Nick Beckstead, which makes me optimistic about the structures they would create.

Remarks and discussion

The above is a rough sketch, pointing in one possible direction for how more people can do as much good as possible. It is not intended as a suggestion for scaling effective altruism to truly mass proportions, let alone to hundreds of millions of people. But that is also not the situation we are in: currently, effective altruism does not know how to utilize even thousands of people, apart from earning to give. My hope is that a shift toward building hierarchical networked structures would help.

A big weakness of this set of ideas is that it is likely not memetically fit in its present form. “Build hierarchical networked structures” is a bad name, and this post isn’t a nice one-paragraph introduction. Just finding a better name could be a big improvement (for various reasons, this is also hard for me; I would really appreciate suggestions).

I would like to thank many EAs for comments and discussions on the topic.