I want Future Perfect, but for science publications

Thanks to JP, Will Bradshaw, Arden Koehler, and several others for helpful comments /​ suggestions.

1. Introduction

Positive public opinion is an important asset for convincing really smart people to work on the most important—yet still socially unsexy—problems, such as AI safety. I’m going to lay out the case for establishing EA-adjacent columns in reputable science/​tech publications (e.g. MIT Tech Review, Scientific American), and how—given the precedent of similar ventures and a fair number of EA science writers—there’s a strong possibility this idea becomes reality.

A note: I have a publication—X—in mind which I won’t name in this post, but feel free to reach out if you’re interested!

2. Problem

Observations are based on personal experience from retreats and community building for Harvard/​MIT.

An outsized proportion of the progress made in the field of AI safety comes from a minority of brilliant EAs (e.g. Paul Christiano) working on hard problems, rather than simply increasing member counts and recruiting capable but otherwise unexceptional students, even when they’re from a fancy school.

Additionally, it’s often hard to convince highly-talented CS students to work on AI safety when there are other competing career paths that are considerably more appealing, such as finance, Big Tech, and transformative startups. The general sentiment seems to be: “Wow, seems important. Hopefully someone else solves this.” There are 2 main reasons:

  1. It’s hard to know where to start; the problem feels like a black box.

  2. For all of its importance and neglectedness, minimizing AI risk is not yet an appealing problem to work on.

    1. Headlines frequently evangelize computer vision advancing beyond human capabilities and deep learning systems beating the world’s best at Go, but we rarely hear about AI safety triumphs, even when there are breakthroughs. Given the technical rigor the field demands and its significant impact on the future going well, we need to hear more about these successes. This is the problem I’ll attempt to address.

3. Solution

One possible solution is reaching out to students in groups with a high density of technical prowess (e.g. IMO camps). Others have suggested similar ideas, so this proposal will focus on a different form of outreach.

There should be a shift in the public discourse towards focusing more on AI safety and existential risk in general. I claim that working with a reputable publication and having libraries of articles to direct excited students towards will gradually make it easier to recruit von Neumanns. Some great examples of outreach in this category include The Precipice and the Future Perfect column at Vox, spearheaded by folks like Kelsey Piper, Ezra Klein, and Dylan Matthews. Back in 2019, Kelsey suggested that more initiatives like FP at other organizations could do quite a lot of good, and I’m not aware of any explicit expansions into this space. I would love to hear about other EA-adjacent columns like this that currently exist, and I’m excited about generating more initiatives like these!

Specifically, I want to launch an X-Risk or EA-adjacent column, similar to Future Perfect, for large science/​tech publications like X. I think this would be quite impactful and tractable.

3.1 What Impact will this have?

Working with existing organizations comes with major perks. Readership stats for X hover around 5 million unique monthly readers and 500k estimated print subscriptions. Those are big numbers! Each month, X publishes around 20 articles, which translates into an average of 250,000 views per article. Some pieces will get millions of views and others will land in the low tens of thousands, but overall, pretty good.
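To make the back-of-envelope reasoning explicit, here is a minimal sketch of the per-article estimate. The figures are the illustrative numbers cited above for publication X, and the calculation assumes monthly readership spreads evenly across articles (roughly one article view per unique reader), which is a simplification:

```python
# Illustrative readership figures for publication "X" (assumptions from the post,
# not measured data).
monthly_unique_readers = 5_000_000
articles_per_month = 20

# If readership were spread evenly, each article would average this many views.
avg_views_per_article = monthly_unique_readers / articles_per_month
print(f"Average views per article: {avg_views_per_article:,.0f}")
# → Average views per article: 250,000
```

In practice the distribution is heavy-tailed, as noted above: a few pieces get millions of views while most land well below the mean.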

Having articles on AI Safety (and other topics) in reputable publications like X also serves as a powerful signal that we can point people to (with very real nerdsnipe potential). To illustrate, this is how a conversation might pan out:

A fictional world with the AI Safety column:

Person 1: Hi (insert smart IMO /​ CS person), have you read the latest article from X on AI Safety?

Person 2: Not yet, what is it about?

Person 1: They cover really interesting progress made on transformers, CNNs,...

Person 2: Wow! This sounds really important, and it seems like there’s a community dedicated to working on these problems.

In addition, we’ll begin to transform the academic dialogue on existential risks. Tech publications can powerfully signal to other digital media organizations by writing about concerns around existential risks and starting public discussions.

Finally, we’ll be tapping into a highly academic and techno-centric audience: exactly the types of people that we want to work on hard problems like Safety. Vox’s Future Perfect content is great but Vox is often associated with US Liberal politics, and in general it seems favorable to have demographic diversification.

3.2 This sounds promising, but how Tractable is it?

There are 2 ways of approaching this question: we could either pitch an entire column or pitch one-off articles (most places take pitches for opinion articles). Pitching a column seems strictly less tractable than pitching single articles, but it has properties that make it worth considering.

Firstly, there’s precedent for EA-esque buy-in at X. They’ve covered content on the dangers of unaligned AI, including an article outlining risks associated with GPT-3, and the writers advocate caution when it comes to large AI systems. Additionally, they currently have a column covering the shifting technological landscape around Covid-19. It’s not a leap to go from “Covid is bad” to “future pandemics are bad” to “we should care about the long-term future.” Their coverage of the pandemic lab-leak story has also been fair and comprehensive from early on, unlike that of other publications. This being said, it’s hard to say whether the EA version of this column should cover a lot of biosecurity content.

Secondly, it’s no secret that digital journalism has been struggling in these changing times. Large tech publications, while less Funding-constrained™ than smaller magazines, seem likely to happily accept proposals that come with a significant grant attached. Vox’s Future Perfect started after a grant from the Rockefeller Foundation, which supports many other similar publications. Furthermore, when a prominent political journalist was asked what forms of leverage are the most useful for pitching new columns, their answer was clear and concise: “Money.” Approaching these publications with generous funding in hand seems to be a powerful value add.

Finally, there are already EAs in journalism. Michael Specter, a writer for the New Yorker, is co-teaching Safeguarding the Future with Kevin Esvelt, a course that introduces X-risk ideas to MIT students. He has written several pieces on biosecurity, and it’s clear that he cares about X-risk. Additionally, Kelsey Piper and the rest of the Future Perfect team at Vox have shown that this model can work well. This foundation provides strong evidence that there exists an appetite for these ideas, and also that there are writers willing to cover this type of content.

4. Caveats

  1. If we don’t have an EA editor leading this column, then it won’t be nearly as good, and would probably just match the status quo.

    • This is super important. We’d want an EA to be in charge of the column; otherwise the content might start out roughly aligned but be vulnerable to value drift over time.

  2. In the case that we get more excitement for AI safety, we might be selecting more for people who only care about the problem because it’s socially appealing, i.e. actors optimizing for status.

    • In response: this seems potentially accurate, but there are other factors. We might just get more risk-averse people, and we might get more people who would have worked on Safety but haven’t yet been exposed to the problem. It’s also good to highlight to the broader AI field that Safety already has infrastructure and a dedicated community.

  3. It seems like AI and bio would be the most likely topics to be well-received at X, but there’s some uncertainty as to how risky they are to cover.

    • Seems true. I feel more comfortable about coverage of AI than of biorisk (despite personally working more on biosecurity problems), but there should certainly be thoughtful editors reading over these pieces before publication. I’m mostly worried about biosecurity content that dives heavily into the specifics, rather than articles introducing technologies and pandemic prevention initiatives. In AI, I’m worried about accelerating “race-to-the-bottom” scenarios through coverage of, say, China’s new LLM that far outstrips GPT-3. The relative contribution of these articles to that outcome seems fairly negligible, but we should still be careful.

  4. Many people have pointed out that it seems challenging to get something like this done, since these institutions are usually pretty rigid.

    • I slightly disagree. Future Perfect and similar columns were founded after grants from the Rockefeller Foundation and other funders, which updates me towards: money + good science writers = considerable leverage. In addition, publications like X have so many nice properties that I think there’s real potential here. But I’m also writing this in part because I want to hear ideas about the best way to approach large digital media organizations.

5. Next Steps

  1. Talk to writers like Kelsey and Dylan about the process of setting up Future Perfect, other EA science writers, and this project idea more generally.

  2. Assemble a team of writers and someone with Editor-status.

  3. Map out a plan for how to frame this pitch.

I’m keen to chat about the idea! And if you know people who might be interested in writing for something like this, feel free to email me at James218.lin@gmail.com. Also would love to hear if this has been attempted in some other capacity before, outside of Future Perfect.