# RyanCarey’s Shortform

• A case of precocious policy influence, and my pitch for more research on how to get a top policy job.

Last week Lina Khan was appointed as Chair of the FTC, at age 32! How did she get such an elite role? At age 11, she moved to the US from London. In 2014, she studied antitrust topics at the New America Foundation (a centre-left think tank). She got a JD from Yale in 2017, publishing work relevant to the emerging Hipster Antitrust movement at the same time. In 2018, she worked as a legal fellow at the FTC. In 2020, she became an associate professor of law at Columbia. This year (2021) she was appointed by Biden.

The FTC chair role is an extraordinary level of success to reach at such a young age. But it kind-of makes sense that she was able to get such a role: she has elite academic credentials that are highly relevant for it, has ridden the hipster antitrust wave, and has experience of working in government and a willingness to keep doing so.

I think biosec and AI policy EAs could try to emulate this. Specifically, they could try to gather some elite academic credentials, while also engaging with regulatory issues and working for regulators or, more broadly, in the executive branch of government. Jason Matheny’s success is arguably a related example.

This also suggests a possible research agenda surrounding how people get influential jobs in general. For many talented young EAs, it would be very useful to know. Similar to how Wiblin ran some numbers in 2015 on the chances of winning a seat in Congress given a background at Yale Law, we could ask about the White House, external political appointments (such as FTC commissioner), and the judiciary. Also, this ought to be quite tractable: all the names are public, e.g. here [Trump years] and here [Obama years], and most of the CVs are in the public domain; it just needs doing.

• What’s especially interesting is that the one article that kick-started her career was, by truth-orientated standards, quite poor. For example, she suggested that Amazon was able to charge unprofitably low prices by selling equity/​debt to raise more cash—but you only have to look at Amazon’s accounts to see that they have been almost entirely self-financing for a long time. This is because Amazon has actually been cashflow positive, in contrast to the impression you would get from Khan’s piece. (More detail on this and other problems here).

Depressingly this suggests to me that a good strategy for gaining political power is to pick a growing, popular movement, become an extreme advocate of it, and trust that people will simply ignore the logical problems with the position.

• My impression is that a lot of her quick success came because her antitrust work tapped into progressive anti-Big-Tech sentiment. It’s possible EAs could somehow fit into the biorisk zeitgeist, but otherwise, I think it would take a lot of thought to figure out how an EA could replicate this.

• Agreed that in her outlying case, most of what she’s done is tap into a political movement in ways we’d prefer not to. But is that true for high-performers generally? I’d hypothesise that elite academic credentials + policy-relevant research + willingness to be political is enough to get people into elite political positions (maybe a tier lower than hers, a decade later), but it’d be worth knowing how all the variables in these different cases contribute.

• Translating EA into Republican. There are dozens of EAs in US party politics, Vox, the Obama admin, Google, and Facebook, but hardly any in the Republican Party, working for the WSJ, appointed by Trump, or working at Palantir. There are a dozen community groups in places like NYC, SF, Seattle, Berkeley, Stanford, Harvard, and Yale, but none in Dallas, Phoenix, Miami, the US Naval Research Laboratory, the West Point Military Academy, etc. (the libertarian-leaning GMU economics department being the sole possible exception).

This is despite the fact that people passing through military academies are disproportionately likely to go on to work on technological dangers in the military and public service, while admission is less competitive than at more liberal colleges.

I’m coming to the view that similarly to the serious effort to rework EA ideas to align with Chinese politics and culture, we need to translate EA into Republican, and that this should be a multi-year, multi-person project.

• I thought this Astral Codex Ten post, explaining how the GOP could benefit from integrating some EA-aligned ideas like prediction markets into its platform, was really interesting. Karl Rove retweeted it here. I don’t know how well an anti-classism message would align with EA in its current form though, if Habryka is right that EA is currently “too prestige-seeking”.

• I’ve thought about this a few times since you wrote it, and I’d like to see what others think. Would you consider making it a top-level post (with or without any additional detail)?

• Maybe shortform posts could graduate to being normal posts if they get some number of upvotes?

• When someone writes a shortform post, they often intend for it to be less visible. I don’t want an automated feature that will often go against the intentions of a post’s author.

• Do you think they intend for less visibility or to signal it’s a lower standard?

• Could be one, the other, neither, or both. But my point is that an automated feature that removes Shortform status erases those differences.

• Good point.

• Affector & Effector Roles as Task Y?

Longtermist EA seems relatively strong at thinking about how to do good, and raising funds for doing so, but relatively weak in affector organs that tell us what’s going on in the world, and effector organs that influence the world. Three examples of ways that EAs can actually influence behaviour are:

- working in & advising US nat sec

- working in UK & EU governments, in regulation

- working in & advising AI companies

But I expect this is not enough, and our (a/​e)ffector organs are bottlenecking our impact. To be clear, it’s not that these roles aren’t mentally stimulating—they are. It’s just that their impact lies primarily in implementing ideas, and uncovering practical considerations, rather than in an ivory tower’s pure, deep thinking.

The world is quickly becoming polarised between the US and China, and this means that certain (a/​e)ffector organs may be even more neglected than others. We may want to promote: i) work as a diplomat, ii) working at diplomat-adjacent think tanks, such as the Asia Society, iii) working at relevant UN bodies relating to disarmament and bioweapon control, iv) working at UN-adjacent bodies that push for disarmament, etc. These roles often reside in large entities that can accept hundreds or thousands of new staff at a wide range of skill levels, and so perhaps many people who are currently “earning to give” should move into these “affector” or “effector” roles (as well as those mentioned above, in other relevant parts of national governments). I’m also curious whether 80,000 Hours has considered diplomatic roles—I couldn’t find much on a cursory search.

• This framing is not quite right, because it implies that there’s a clean division of labour between thinkers and doers. A better claim would be: “we have a bunch of thinkers, now we need a bunch of thinker-doers”.

• There’s a new center in the Department of State, dedicated to the diplomacy surrounding new and emerging tech. This seems like great place for Americans to go and work, if they’re interested in arms control in relation to AI and emerging technology.

Confusingly, it’s called the “Bureau of Cyberspace Security and Emerging Technologies (CSET)”. So we now have to distinguish the State CSET from the Georgetown one—the “Center for Security and Emerging Technology”.

• Thanks for this.

I’ve also been thinking about similar things—e.g. about how there might be a lot of useful things EAs could do in diplomatic roles, and how an 80k career profile on diplomatic roles could be useful. This has partly been sparked by thinking about nuclear risk.

Hopefully in the coming months I’ll write up some relevant thoughts of my own on this and talk to some people. And this shortform post has given me a little extra boost of inclination to do so.

• [Maybe a bit of a tangent]

A Brookings article argues that (among other things):

1. A key priority for the Biden administration should be to rebuild the State Department’s arms control workforce, as its current workforce is ageing and there have been struggles with recruiting and retaining younger talent

2. Another key priority should be “responding to the growing anti-satellite threat to U.S. and allies’ space systems”. This should be tackled by, among other things:

• “tak[ing] steps to revitalize America’s space security diplomacy”

• “consider[ing] ways to expand space security consultations with allies and partners, and promote norms of behavior that can advance the security and sustainability of the outer space environment”

• (Note: It’s not totally clear to me whether this part of the article is solely about anti-satellite threats or about a broader range of space-related issues.)

This updated me a little bit further towards thinking it might be useful:

• for more EAs to go into diplomacy and/​or arms control

• for EAs to do more to support other efforts to improve diplomacy and/​or arms control (e.g., via directing funding to good existing work on these fronts)

Here’s the part of the article which is most relevant to point 1:

The State Department’s arms control workforce has been under stress for some time due to problems associated with an aging staff and the inability to effectively recruit and retain younger talent. For example, a 2014 State Department Inspector General report on the Bureau of Arms Control, Verification, and Compliance states: “Forty-eight percent of the bureau’s Civil Service employees will be eligible to retire in the next 5 years, the second-highest percentage in the Department of State … Absent a plan to improve professional development and succession planning for the next generation of arms control experts, the bureau is at risk of losing national security expertise vital to its mission.”

Though many of the challenges associated with the arms control workforce pre-date the Trump administration, according to press reports, these trends have accelerated under its watch. As a result, the Biden administration will inherit an arms control workforce that has been hollowed out. A key priority for the incoming team must be to rebuild this workforce. Luckily, the Under Secretary of State for Arms Control and International Security has the authority under the Arms Control and Disarmament Act to hire technical arms control experts through an expedited process. In the near-term, the State Department should take advantage of this and other existing hiring authorities to help rebuild the arms control workforce. Over the longer term, it should work with Congress to determine whether new hiring authorities would help grow and maintain the arms control workforce.

• EA Highschool Outreach Org (see Catherine’s and Buck’s posts, my comment on EA teachers)

Running a literal school would be awesome, but seems too consuming of time and organisational resources to do right now. Assuming we did want to do that eventually, what could be a suitable smaller step? Founding an organisation with vetted staff, working full-time on promoting analytical and altruistic thinking to high-schoolers; professionalising in this way increases the safety and reputability of these programs. Its activities should be targeted at top schools, and could include, in increasing order of duration:

1. One-off outreach talks at top schools

2. Summer programs in more countries, and in more subjects, and with more of an altruistic bent (i.e. variations on SPARC and Eurosparc)

3. Recurring classes in things like philosophy, econ, and EA. Teaching by visitors could be arranged by liaising with school teachers, similarly to how external teachers are brought in for chess classes.

4. After-school, or weekend, programs for interested students

I’m not confident this would go well, given the various reports from Catherine’s recap and Buck’s further theorising. But targeting the right students, and bringing the right speakers, gives it a chance of success. If you get to (3) and (4), all is going well, and the number of interested teachers and students is rising, it would be very natural for the org to scale into a school proper.

• High impact teachers? (Teaching as Task Y). More recent thoughts at EA Highschool Outreach Org. See also An EA teaching pathway?

The typical view, here, on high-school outreach seems to be that:

1. High-school outreach has been somewhat effective, uncovering one highly capable do-gooder per 10-100 exceptional students.

2. But people aren’t treating it with the requisite degree of sensitivity: they don’t think enough about what parents think, they talk about “converting people”, and there have been bad events of unprofessional behaviour.

So I think high-school outreach should be done, but done differently. Involving some teachers would be a useful step toward professionalisation (separating the outreach from the rationalist community would be another).

But (1) also suggests that teaching at a school for gifted children could be a priority activity in itself. The argument is that if a teacher can inspire a bright student to try to do good in their career, then the student might be many times more effective than the teacher themselves would have been, had they tried to work directly on the world’s problems. And students at such schools are exceptional enough (Z > 2) that this could happen many times throughout a teacher’s career.

This does not mean that teaching is the best way to reach talented do-gooders. But it doesn’t have to be, because it could attract some EAs who wouldn’t suit other outreach paths. It leads to stable and respected employment, involves interpersonal contact that can be meaningful, and so on (at least, some interactions with teachers were quite meaningful to me, well before EA entered my picture).

I’ve said that teachers could help professionalise summer schools, and inspire students. I also think that a new high school for gifted altruists could be a high priority. It could gather talented altruistic students together, so that they have more social support, and better meet their curricular needs (e.g. econ, programming, philosophy, research). I expect that such a school could attract great talent. It would be staffed with pretty talented and knowledgeable teachers. It would be advised by some professors at top schools. If necessary, by funding scholarships, it could grow its student base arbitrarily. Maybe a really promising project.

• A step that I think would be good to see even sooner is any professor at a top school getting in a habit of giving talks at gifted high-schools. At some point, it might be worth a few professors each giving dozens of talks per year, although it wouldn’t have to start that way.

Edit: or maybe just people with “cool” jobs. Poker players? Athletes?

Suppose longtermism already has some presence in SF, Oxford, DC, London, Toronto, Melbourne, Boston, New York, and is already trying to boost its presence in the EU (especially Brussels, Paris, Berlin), UN (NYC, Geneva), and China (Beijing, …). Which other cities are important?

I think there’s a case for New Delhi, as the capital of India. It’s the third-largest country by GDP (PPP), soon to be the most populous country, high-growth, and a neighbour of China. Perhaps we’re neglecting it due to founder effects, because it has lower average wealth, because its universities aren’t thriving, and/​or because it currently has a nationalist government.

I also see a case for Singapore: its government and universities could be a place from which to work on de-escalating US-China tensions. It’s physically and culturally not far from China. As a city-state, it benefits a lot from peace and global trade. It’s by far the most-developed member of ASEAN, which is also large, mostly neutral, and benefits from peace. It’s generally very technocratic with high historical growth, and is also the HQ of APEC.

• I feel Indonesia /​ Jakarta is perhaps overlooked /​ neglected sometimes, despite it being expected to be the world’s 4th largest economy by 2050:

• Jakarta—yep, it’s also ASEAN’s HQ. Worth noting, though, that Indonesia is moving its capital out of Jakarta.

• Yes, good point! My idle speculations have also made me wonder about Indonesia at least once.

• PPP-adjusted GDP seems less geopolitically relevant than nominal GDP. Here’s a nominal GDP table based on the same 2017 PwC report (source); the results are broadly similar:

• I’d be curious to discuss whether there’s a case for Moscow. 80,000 Hours lists being a Russia or India specialist under “Other paths we’re excited about”. The case would probably revolve around Russia’s huge nuclear arsenal and efforts to build AI. If climate change were to become really bad (say 4 degrees+ of warming), Russia (along with Canada and New Zealand) would become the new hub for immigration given its geography, and this alone could make it one of the most influential countries in the world.

• Getting advice on a job decision, efficiently (five steps)

When using EA considerations to decide between job offers, asking for help is often a good idea, even if those who could provide advice are busy, and their time is valued. This is because advisors can spend minutes of their time to guide years of yours. It’s not disrespecting their “valuable” time, if you do it right. I’ve had some experience both as an advisor and as an advisee, and I think a safe bet is to follow these five steps:

1. Make sure you actually have a decision that will concretely guide months to years of your time, i.e. ask about which offer to take, not which company to apply to.

2. Distill the pros, cons, and neutral attributes of each option down to a page or two of text, in a format that permits inline comments (ideally a GDoc). Specifically:

• To begin with, give a rough characterization of each option, describing it in neutral terms.

• Do mention non-EA considerations e.g. location preferences, alongside EA-related ones.

• Remove duplicates. If something is listed as a “pro” for option A, it need not also be listed as a “con” for option B. This helps with conciseness and helps avoid arbitrary double-counting of considerations. If there are many job offers, then simply choose some option A as the baseline, and measure the pros/​cons of each other option relative to option A, as in the “three-way comparison example” below.

• Merge pros/​cons that are related to one another. This also helps with conciseness and avoiding arbitrary double-counting.

• Indicate the rough importance of various pros/​cons. If you think some consideration is more important, then you should explicitly mark it as so. You can mark considerations as strong (+++/​---) or weak (+/​-) if you want.

3. Share it with experts whose time is less valuable before the paramount experts in your field.

4. Make sure the advisors have an opportunity to give you an all-things-considered judgment within the document (to allow for criticism), or privately, in case they are reluctant to share their criticism of some options.

5. To make a decision, don’t just add up the considerations in the list. Also, take into account the all-things-considered judgments of advisors (which includes expertise that they may not be able to articulate), as well as your personal instincts (which include self-knowledge that you may not be able to articulate).
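The +++/​- weighting from step 2 can be tallied mechanically, though per step 5 the raw sum should not be the final word. Here is a minimal sketch; the option names and marks are hypothetical, and each option’s marks are measured relative to a baseline option A:

```python
# Illustrative tally of weighted pros/cons (hypothetical data).
# "+++" = strong pro (+3), "+" = weak pro (+1), "--" = moderate con (-2), etc.

def score(marks):
    """Convert a list of '+'/'-' marks into a net score vs. the baseline."""
    return sum(len(m) if m.startswith("+") else -len(m) for m in marks)

# Considerations for options B and C, each relative to baseline option A.
options = {
    "Option B": ["+++", "-", "+"],   # e.g. better fit, worse location, small pay bump
    "Option C": ["++", "--"],        # e.g. good mentorship, long commute
}

for name, marks in options.items():
    print(name, "vs baseline:", score(marks))
```

A positive score suggests the option beats the baseline on the listed considerations; advisors’ all-things-considered judgments and your own instincts should then adjust that number.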

• Would it be more intuitive to do your 3-way comparison the other way around—list the pros and cons of each option relative to FHI, rather than of FHI relative to each alternative?

• I agree that’s better. If I turn this into a proper post, I’ll fix the example.

Certain opportunities are much more attractive to the impact-minded than to regular academics, and so may be good value relative to how competitive they are.

• The secure nature of EA funding means that tenure is less important (although of course it’s still good).

• Some centers do research on EA-related topics, and are therefore more attractive, such as Oxford, GMU.

• Universities in or near capital cities, such as Georgetown, UMD College Park, ANU, Ghent, Tsinghua or near other political centers such as NYC, Geneva may offer a perch from which to provide policy input.

• Those doing interdisciplinary work may want to apply for a department that’s strong in a field other than their own. For example, people working in AI ethics may benefit from centers that are great at AI, even if they’re weak in philosophy.

• Certain universities may be more attractive due to being in an EA hub, such as Berkeley, Oxford, UCL, UMD College Park, etc.

Thinking about an academic career in this way makes me think more people should pursue tenure at UMD, Georgetown, and Johns Hopkins (good for both biosecurity and causal models of AI), than I thought beforehand.

• Overzealous moderation?

Has anyone else noticed that the EA Forum moderation is quite intense of late?

Back in 2014, I’d proposed quite limited criteria for moderation: “spam, abuse, guilt-trips, socially or ecologically destructive advocacy”. I’d said then: “Largely, I expect to be able to stay out of users’ way!” But my impression is that the moderators have, at some point after 2017, taken to advising and sanctioning users based on their tone, for example, here (Halstead being “warned” for unsubstantiated true comments), with “rudeness” and “Other behavior that interferes with good discourse” being criteria for content deletion. Generally I get the impression that we need more, not fewer, people directly speaking harsh truths, and that it’s rarely useful for a moderator to insert themselves into such conversations, given that we already have other remedies: judging a user’s reputation, counterarguing, or voting up and down. Overall, I’d go as far as to conjecture that if moderators did 50% less (by continuing to delete spam, but standing down in the less clear-cut cases) the forum would be better off.

• Do we have any statistics on the number of moderator actions per year?

• Has anyone had positive or negative experiences with being moderated?

• Speaking as the lead moderator, I feel as though we really don’t make all that many visible “warning” comments (though of course, “all that many” is in the eye of the beholder).

I do think we’ve increased the number of public comments we make, but this is partly due to a move toward public rather than private comments in cases where we want to emphasize the existence of a given rule or norm. We send fewer private messages than we used to (comparing the last 12 months to the 18 months before that).

Since the new Forum was launched at the end of 2018, moderator actions (aside from deleting spam, approving posts, and other “infrastructure”) have included:

• Two temporary bans

• Phil Torres (one year, see link for explanation)

• rafa_fanboy (three months, from February—May 2019, for a pattern of low-quality comments that often didn’t engage with the relevant post)

• 26 private messages sent to users to alert them that their activity was either violating or close to violating the Forum’s rules. To roughly group by category, there were:

• 7 messages about rude/​insulting language

• Messages about posts that:

• Had no apparent connection to effective altruism, or

• Were very confusing, completely unintelligible, or written entirely in a language other than English

• 4 messages about infohazards or the unwelcome disclosure of someone’s personal information (not sure whether to call that an “infohazard”)

• 2 messages sent to users we suspected had been involved in mass downvoting

• To preserve the anonymity of votes, we created a system to contact these users without knowing any information about their accounts.

• Neither case was serious enough to warrant any action beyond a warning.

Of the 26 private messages, only 8 were sent in the last 12 months. I’m not sure what fraction of that change is “fewer posts that break the rules” vs. “moderation standards are a bit looser now” vs. “we’re more likely to do public rather than private moderation now” vs. “random chance”.

I searched through the history of the Slack channel used by Forum moderators, and the last year of my own comments, and found the following instances of public moderation:

1. 7/​2/​2020

2. 7/​27/​2020 (this was also the reason for one of the private messages I mentioned)

3. 1/​3/​2021

4. 1/​27/​2021

5. 3/​9/​2021

6. 3/​19/​2021

7. 3/​22/​2021

8. 5/​12/​2021 (Phil)

If you know of anything I’ve missed, please let me know, and I’ll add it to this list.

In some cases, I wrote and published the comment myself; in other cases, multiple mods reviewed the comment before it was posted. I do much more “active moderation” than the rest of the team, though we all share the work of reading new comments, reviewing content flagged by users, etc.

There are probably instances from before May 2020 as well, but I didn’t have time to track down all of those, and they seemed less relevant to concerns about moderation “of late”.

Overall, we send out roughly one moderation warning per month, and roughly half of the warnings involve concern over rudeness or insults. For context, over the last year, the Forum has averaged ~40 comments per day. (I don’t know how these numbers compare to moderation/​usage statistics from before the new Forum was launched.)

*****

Overall, I’ve gotten roughly equal amounts of negative feedback saying “there’s not enough moderation of rudeness, which makes the Forum unpleasant to use” and “there’s too much tone policing, which makes the Forum feel stifling”. This isn’t biased towards newer users wanting more moderation — people in the “not enough” group include a lot of experienced community members whose views I respect (same goes for the other group, of course).

Based on my recent exchanges with Dale and Halstead, I’ve updated slightly toward future moderation comments using more language like “we suggest that…” and less language like “please don’t do this”, to emphasize that we’re trying to maintain a set of norms, rather than cracking down on individual users.

Has anyone had positive or negative experiences with being moderated?

As a moderator, my perspective on this is obviously biased, but the modal reaction I get from the moderated person is something like “I get what you’re saying, but I don’t think I did anything wrong”. There are also a few cases of “I don’t get what you’re saying, this seems stupid” and a few cases of “actually, I think you’re right”.

I’d be interested in other users’ thoughts on this. I don’t think of myself as an especially skilled moderator, and I’ve certainly made mistakes before. I’m still trying to find the best ways to keep the Forum’s standards high (for quality of discussion and for kindness/​charitability) while ensuring that people feel comfortable sharing views that are speculative, unpopular, etc.

• I generally think more moderation is good, but have also pushed back on a number of specific moderation decisions. In general I think we need more moderation of the type “this user seems like they are reliably making low-quality contributions that don’t meet our bar” and less moderation of the type “this was rude/​impolite but its content was good”, of which there have been a few cases recently.

• Yeah, I’d revise my view to: moderation seems too stringent on the particular axis of politeness/​rudeness. I don’t really have any considered view on other axes.

• I don’t think of myself as an especially skilled moderator, and I’ve certainly made mistakes before

You’re a pretty good moderator.
Do you think some sort of periodic & public “moderation report” (like the summary above) would be convenient?

• Thanks!

I doubt that the stats I shared above are especially useful to share regularly (in a given quarter, we might send two or three messages). But it does seem convenient for people to be able to easily find public moderator comments.

In the course of writing the previous comment, I added the “moderator comment” designation to all the comments that applied. I’ll talk to our tech team about whether there’s a good way to show all the moderator comments on one page or something like that.

• I actually think you are an unusually skilled moderator, FWIW.

• Thanks, this detailed response reassures me that the moderation is not way too interventionist, and it also sounds positive to me that the moderation is becoming a bit more public, and less frequent.

• I don’t have a view of the level of moderation in general, but think that warning Halstead was incorrect. I suggest that the warning be retracted.

It also seems out of step with what the forum users think—at the time of writing, the comment in question has 143 Karma (56 votes).

• Could it be useful for moderators to take into account the amount of karma /​ votes a statement receives?

I’m no expert here, and I just took a bunch of minutes to get an idea of the whole discussion—but I guess that’s more than most people who will have contact with it. So it’s not the best assessment of the situation, but maybe you should take it as evidence of what it’d look like for an outsider or the average reader.
In Halstead’s case, the warning sounds even positive:

However, when I discussed the negative claims with Halstead, he provided me with evidence that they were broadly correct — the warning only concerns the way the claims were presented. While it’s still important to back up negative claims about other people when you post them, it does matter whether or not those claims can be reasonably backed up.

I think Aaron was painstakingly trying to follow moderation norms in this case; otherwise, moderators would risk having people accuse them of taking sides. I contrast it with Sean’s comments, which were more targeted and catalysed Phil’s replies, and ultimately led to the latter being banned; but Sean disclosed evidence for his statements, and consequently was not warned.

• (Sharing my personal views as a moderator, not speaking for the whole team.)

Could it be useful for moderators to take into account the amount of karma /​ votes a statement receives?

See my response to Larks on this:

One could reasonably interpret karma as demonstrating that many people thought a comment was valuable for public discussion.

However, I am exceedingly wary of changing the way moderation works based on a comment’s karma score [...] while some users contribute more value to Forum discussion than others, and karma can be a signal of this, I associate the pattern of “giving ‘valued’ users more leeway to bend rules/​norms” with many bad consequences in many different settings.

Even if we make a point to acknowledge how useful a contribution might have been, or how much we respect the contributor, I don’t want that to affect whether we interpret it as having violated the rules. We can moderate kindly, but we should still moderate.

• People often tell me that they encountered EA because they were Googling “How do I choose where to donate?”, “How do I choose a high-impact career?” and so on. Has anyone considered writing up answers to these topics as WikiHow instructionals? It seems like it could attract a pretty good amount of traffic to EA research and the EA community in general.

• I’m interested in funding someone with a reasonable track record to work on this (if WikiHow permits funding). You can submit a very quick-and-dirty funding application here.

• How the Haste Consideration turned out to be wrong.

In The haste consideration, Matt Wage essentially argued that given exponential movement growth, recruiting someone is very important, and that in particular, it’s important to do it sooner rather than later. After the passage of nine years, no one in the EA movement seems to believe it anymore, but it feels useful to recap what I view as the three main reasons why:

1. Exponential-looking movement growth will (almost certainly) level off eventually, once the ideas reach the susceptible population. So earlier outreach really only causes the movement to reach its full size at an earlier point. This has been learned from experience, as movement growth was north of 50% around 2010, but has since tapered to around 10% per year as of 2018-2020. And I’ve seen similar patterns in the AI safety field.

2. When you recruit someone, they may do what you want initially. But over time, your ideas about how to act may change, and they may not update with you. This has been seen in practice in the EA movement, which was highly intellectual and designed around values rather than particular actions. People were reminded that their role is to help answer a question, not imbibe a fixed ideology. Nonetheless, members’ habits and attitudes crystallised—severely—so that now, when leaders change the message to focus on what they believe to be higher priorities, people complain that it doesn’t represent the views and interests of the movement! The same thinking persists several years later. [Edit: this doesn’t counter the haste consideration per se; it’s just one way that recruitment is less good than one might hope. See AGB’s subthread.]

3. The returns from one person’s movement-building activities will often level off. Basically, it’s a lot easier to recruit your best friends than the rest of your friends, and much easier to recruit friends of friends than their friends. It also gets harder once you leave university. I saw this personally: the people who did the most good in the EA movement with me, and/​or due to me, were among my best couple of friends from high school, and some of my best friends from the local LessWrong group. That recruitment during my university days seems potentially much more impactful than my direct actions. More recent efforts at recruitment and persuasion have also made a difference, but they have been more marginal, and seem less impactful than my own direct work.
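The leveling-off in point (1) can be sketched with a toy logistic model. All numbers here are illustrative choices of mine, not estimates from the post; the point is only that under saturating growth, early recruitment mostly changes *when* the ceiling is reached:

```python
import math

def logistic(t, K=10_000, r=0.5, n0=100):
    """Movement size after t years: starts at n0, grows at initial rate ~r,
    and saturates at the 'susceptible population' K."""
    return K / (1 + (K / n0 - 1) * math.exp(-r * t))

# Early on, annual growth is near exp(r) - 1 (fast, exponential-looking);
# a decade later, the same model gives much slower growth as it nears K.
growth_early = logistic(1) / logistic(0) - 1
growth_late = logistic(11) / logistic(10) - 1
```

Shifting every recruit a year earlier just translates the whole curve left: the movement hits its full size sooner, but the full size itself is unchanged.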

Taking all of this together, I’ve sometimes recommended university students not spend too much time on recruitment. The advice especially applies to top students, who could become a distinguished academic or policymaker later on—as their time may be better spent preparing for that future. My very rough sense is that for some, the optimal amount of time to spend recruiting may be one full-time month; for others, a full-time year. And importantly, our best estimates may change over time!

• I have a few thoughts here, but my most important one is that your (2), as phrased, is an argument in favour of outreach, not against it. If you update towards a much better way of doing good, and any significant fraction of the people you ‘recruit’ update with you, you presumably did much more good via recruitment than via direct work.

Put another way, recruitment defers the question of how to do good into the future, and is therefore particularly valuable if we think our ideas are going to change/​improve particularly fast. By contrast, recruitment (or deferring to the future in general) is less valuable when you ‘have it all figured out’; you might just want to ‘get on with it’ at that point.

***

It might be easier to see with an illustrated example:

Let’s say in the year 2015 you are choosing whether to work on cause P, or to recruit for the broader EA movement. Without thinking about the question of shifting cause preferences, you decide to recruit, because you think that one year of recruiting generates (e.g.) two years of counterfactual EA effort at your level of ability.

In the year 2020, looking back on this choice, you observe that you now work on cause Q, which you think is 10x more impactful than cause P. With frustration and disappointment, you also observe that a ‘mere’ 25% of the people you recruited moved with you to cause Q, and so your original estimate of two years actually became six months (actually more because P still counts for something in this example, but ignoring that for now).

This looks bad because six months < one year, but if you focus on impact rather than time spent then you realise that you are comparing one year of work on cause P, to six months of work on cause Q. Since cause Q is 10x better, your outreach 5x outperformed direct work on P, versus the 2x you thought it would originally.
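Plugging in the hypothetical numbers from the example above (one year of recruiting generates 2 years of counterfactual effort, 25% of recruits follow you from cause P to cause Q, and Q is 10x as impactful as P), the arithmetic is:

```python
def impact_of_recruiting(years_generated, retention, q_multiplier):
    """Impact of one year of recruiting, in units of 'one year of work on P'."""
    return years_generated * retention * q_multiplier

direct_work_on_p = 1.0                          # the 2015 alternative: a year on P
recruiting = impact_of_recruiting(2, 0.25, 10)  # 2 * 0.25 * 10 = 5.0

# Despite 75% attrition, recruiting comes out 5x better than direct work on P
# (ignoring, as above, the residual value of the recruits who stayed on P).
```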

***

You can certainly plug in numbers where the above equation will come out the other way—suppose you had 99% attrition—but I guess I think they are pretty implausible? If you still think your (2) holds, I’m curious what (ballpark) numbers you would use.

• Good point—this has changed my model of this particular issue a lot (it’s actually not something I’ve spent much time thinking about).

I guess we should (by default) imagine that if at time T you recruit a person, that they’ll do an activity that you would have valued, based on your beliefs at time T.

Some of us thought that recruitment was even better, in that the recruited people will update their views over time. But in practice, they only update their views a little bit. So the uncertainty-bonus for recruitment is small. In particular, if you recruit people to a movement based on messaging in cause A, you should expect relatively few people to switch to cause B based on their group membership, and there may be a lot of within-movement tensions between those that do/​don’t.

There are also uncertainty-penalties for recruitment. While recruiting, you crystallise your own ideas. You give up time that you might’ve used for thinking, and for reducing your uncertainties.

On balance, recruitment now seems like a pretty bad way to deal with uncertainty.

• EAs have reason to favour Top-5 postdocs over Top-100 tenure?

A bunch of people face a choice between being a postdoc at one of the top 5 universities, and being a professor at one of the top 100 universities. For the purpose of this post, let’s set aside the possibilities of working in industry, grantmaking and nonprofits. Some of the relative strengths (+) of the top-5 postdoc route are accentuated for EAs, while some of the weaknesses (-) are attenuated:

+larger university-based EA communities, many of which are at top-5 universities

-less secure research funding (less of an issue in longtermist research)

-less career security (less important for high levels of altruism)

-can’t be a sole supervisor of a PhD student (less important if one works with a full professor who can supervise, e.g. at Berkeley or Oxford).

-harder to set up a centre (this one does seem bad for EAs, and hard to escape)

There are also considerations relating to EAs’ ability to secure tenure, which is sometimes decreased a bit because their research runs against prevailing trends.

Overall, I think that some EAs should still pursue professorships, especially to set up research centres or to establish a presence in an influential location, but that we will want more postdocs than usual.

• A quite obvious point that may still be worth making is that the balance of the considerations will look very different for different people. E.g. if you’re able to have a connection with a top university while being a professor elsewhere, that could change the calculus. There could be numerous idiosyncratic considerations worth taking into account.

• I once got the advice from highly successful academics (tenured Ivy League profs) that if you want to become an academic you should “resist the temptation of the tenure track for as long as possible” and rather do another postdoc.

Once you enter the tenure track, the clock starts ticking, and at the end of it, your tenure case will be judged by your total publication record. If you do (another) postdoc before entering the tenure track, you’ll have more publications in the pipeline, which will give you a competitive edge. This might also increase your chances of getting a more competitive professorship.

By the same token, it perhaps pays to do pre-doctoral fellowships and master’s degrees. This is also relevant when picking between a European and a US PhD: the 3-year European PhD might be better for people who do not want to stay in academia, whereas the 5+ year US PhD might be better for an academic career.

• Making community-building grants more attractive

An organiser from Stanford EA asked me today how community building grants could be made more attractive. I have two reactions:

1. Specialised career pathways. To the extent that this can be done without compromising effectiveness, community-builders should be allowed to build field-specialisations, rather than just geographic ones. Currently, community-builders might hope to work at general outreach orgs like CEA and 80k. But general orgs will only offer so many jobs. Casting the net a bit wider, many activities of Forethought Foundation, SERI, LPP, and FLI are field-specific outreach. If community-builders take on some semi-specialised kinds of work in AI, or policy, or econ, (in connection with these orgs or independently) then this would aid their prospects of working for such orgs or returning to a more mainstream pathway.

2. “Owning it”. To the extent that community building does not offer a specialised career pathway, the fact that it’s a bold move should be incorporated into the branding. The Thiel Fellowship offers $100k to ~2 dozen students per year to drop out of their programs and work on a startup that might change the world. Not everyone will like it, but it’s bold, it’s a round and reasonably-sized number, with a name attached and a dedicated website. Imagine a “MacAskill Fellowship” that offers $100k for a student from a top university to pause their studies and spend one year promoting prioritisation and long-term thinking—it’d be a more attractive path.

• The Safety/​Capabilities Ratio

People who do AI safety research sometimes worry that their research could also contribute to AI capabilities, thereby hastening a possible AI safety disaster. But when might this be a reasonable concern?

We can model a researcher i as contributing intellectual resources s_i to safety and c_i to capabilities, both real numbers. We let the total safety investment (of all researchers) be S = Σ_i s_i, and the capabilities investment be C = Σ_i c_i. Then, we assume that a good outcome is achieved if S/C > k, for some constant k, and a bad outcome otherwise.

The assumption about S/C could be justified by safety and capabilities research having diminishing returns. Then you could have log-uniform beliefs (over some interval) about the level of capabilities required to achieve AGI, and the amount of safety research required for a good outcome. Within the supports of those beliefs, linearly increasing S/C will linearly increase the chance of safe AGI.

In this model, having a positive marginal impact doesn’t require us to completely abstain from contributing to capabilities. Rather, one’s impact is positive if the ratio of safety and capabilities contributions is greater than the average of the rest of the world. For example, a 50% safety/​50% capabilities project is marginally beneficial, if the AI world focuses only 3% on safety.
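A minimal sketch of this marginal-impact condition (the claim that a contribution helps whenever one’s own safety/capabilities ratio beats the rest of the world’s; the function name and variable names are mine):

```python
def improves_ratio(s_i, c_i, S, C):
    """Does adding a contribution (s_i, c_i) raise the world's
    safety/capabilities ratio S/C? Equivalent to asking whether the
    contributor's own ratio s_i/c_i exceeds the rest of the world's."""
    return (S + s_i) / (C + c_i) > S / C

# The example from the text: a 50% safety / 50% capabilities project,
# in an AI world that focuses only 3% on safety.
print(improves_ratio(s_i=1, c_i=1, S=3, C=97))  # True: 50/50 beats 3/97
```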

If the AI world does only focus 3% on safety, then when is nervousness warranted? Firstly, technical researchers might make a big capabilities contribution if they are led to fixate on dangerous schemes that lie outside of current paradigms, like self-improvement perhaps. This means that MIRI’s concerns about information security are not obviously unreasonable. Secondly, AI timelines research could lead one to understand the roots of AI progress, and thereby set in motion a wider trend toward more dangerous research. This could justify worries about the large compute experiments of OpenAI. It could also justify worries about the hypothetical future in which an AI safety person launches a large AI project for the government. Personally, I think it’s reasonable to worry about cases like these breaching the 97% barrier.

It is a high bar, however. And I think in the case of a typical AI safety researcher, these worries are a bit overblown. In this 97%-capabilities world, the median safety researcher should worry a bit less about abstaining from capabilities contributions, and a bit more about the size of their contribution to safety.

• EA Tweet prizes.

Possible EA intervention: just like the EA Forum Prizes, but for the best Tweets (from an EA point-of-view) in a given time window.

Reasons this might be better than the EA Forum Prize:

1) Popular tweets have greater reach than popular forum posts, so this could promote EA more effectively

2) The prizes could go to EAs who are not regular forum users, which could also help to promote EA more effectively.

One would have to check the rules and regulations.

• The Emergent Ventures Prize is an example of a prize scheme that seems good to me: giving $100k prizes to great blogs, wherever on the internet they’re located.

• I read every Tweet that uses the phrase “effective altruism” or “#effectivealtruism”. I don’t think there are many EA-themed Tweets that make novel points, rather than linking to existing material. I could easily be missing Tweets that don’t have these keywords, though. Are there any EA-themed Tweets you’re thinking of that really stood out as being good?

• Tom Inglesby on nCoV response is one recent example from just the last few days. I’ve generally known Stefan Schubert, Eliezer Yudkowsky, Julia Galef, and others to make very insightful comments there. I’m sure there are very many other examples. Generally speaking, though, the philosophy would be to go to the platforms that top contributors are actually using, and offer our services there, rather than trying to push them onto ours—or at least to complement the latter with the former.

• I agree with this philosophy, but remain unsure about the extent to which strong material appears on various platforms. (I sometimes do reach out to people who have written good blog posts or Facebook posts to send my regards and invite them to cross-post; this is a big part of why Ben Kuhn’s recent posts have appeared on the Forum, and one of those did win a prize.) Aside from 1000-person-plus groups like “Effective Altruism” and “EA Hangout”, are there any Facebook groups that you think regularly feature strong contributions? (I’ve seen plenty of good posts come out of smaller groups, but given the sheer number of groups, I doubt that the list of those I check includes everything it should.)

*****

I follow all the Twitter accounts you mentioned. While I can’t think of recent top-level Tweets from those accounts that feel like good Prize candidates, I think the Tom Inglesby thread is great!
One benefit of the Forum Prize is that it (ideally) incentivizes people to come and post things on the Forum, and to put more effort into producing really strong posts. It also reaches people who deliberately worked to contribute to the community. If someone like Tom Inglesby were suddenly offered, say, $200 for writing a great Twitter thread, it’s very unclear to me whether this would lead to any change in his behavior (and it might come across as very odd). Maybe not including any money, but simply cross-posting the thread and granting some kind of honorary award, could be better.

Another benefit: The Forum is centralized, and it’s easy for judges to see every post. If someone wants to Tweet about EA and they aren’t already a central figure, we might have a hard time finding their material (and we’re much more likely to spot, by happenstance, posts made by people who have lots of followers).

That said, there’s merit to thinking about ways we can reach out to send strong complimentary signals to people who produce EA-relevant things even if they’re unaware of the movement’s existence. Thanks for these suggestions!