Open Philanthropy is seeking proposals for outreach projects
Open Philanthropy is seeking proposals from applicants interested in growing the community of people motivated to improve the long-term future via the kinds of projects described below.[1]
Apply to start a new project here; express interest in helping with a project here.
We hope to draw highly capable people to this work by supporting ambitious, scalable outreach projects that run for many years. We think a world where effective altruism, longtermism, and related ideas are routine parts of conversation in intellectual spaces is within reach, and we’re excited to support projects that work towards that world.
In this post, we describe the kinds of projects we’re interested in funding, explain why we think they could be very impactful, and give some more detail on our application process.
Proposals we are interested in
Programs that engage with promising young people
We are seeking proposals for programs that engage with young people who seem particularly promising in terms of their ability to improve the long-term future (and may have interest in doing so).
Here, by “particularly promising”, we mean young people who seem well-suited to building aptitudes that have high potential for improving the long-term future. Examples from the linked post include aptitudes for conducting research, advancing into top institutional roles, founding or supporting organizations, communicating ideas, and building communities of people with similar interests and goals, among others. Downstream, we hope these individuals will be good fits for what we believe to be priority paths for improving the long-term future, such as AI alignment research, technical and policy work reducing risks from advances in synthetic biology, career paths involving senior roles in the national security community, and roles writing and speaking about relevant ideas, among others.
We’re interested in supporting a wide range of possible programs, including summer or winter camps, scholarship or fellowship programs, seminars, conferences, workshops, and retreats. We think programs with the following characteristics are most likely to be highly impactful:
They engage people ages 15–25 who seem particularly promising in terms of their ability to improve the long-term future, for example people who are unusually gifted in STEM, economics, philosophy, writing, speaking, or debate.
They cover effective altruism (EA), rationality, longtermism, global catastrophic risks, or related topics.
They involve having interested young people interact with people currently working to improve the long-term future.
Examples of such programs that Open Philanthropy has supported include SPARC, ESPR, the SERI and FHI summer research programs, and the recent EA Debate Championship. However, we think there is room for many more such programs.
We especially encourage program ideas which:
Have the potential to engage a large number of people (hundreds to tens of thousands) per year, though we think starting out with smaller groups can be a good way to gain experience with this kind of work.
Engage with groups of people who don’t have many ways to enter relevant intellectual communities (e.g. they are not in areas with high concentrations of people motivated to improve the long-term future).
Include staff who have experience working with members of the groups they hope to engage with—in particular, experience talking with young people about new ideas while being respectful of their intellectual autonomy and encouraging independent intellectual development.
We encourage people to have a low bar for submitting proposals to our program, but note that we view this as a sensitive area: we think programs like these have the potential to do harm by putting young people in environments where they could have negative experiences. The Community Health and Special Projects team at the Centre for Effective Altruism (email communityhealth@centreforeffectivealtruism.org) is available to provide advice on these kinds of risks.
Some reasons why we think this work has high expected value
A priori, we would guess that people are more likely to get interested in new ideas and opportunities when they are relatively young and have fewer preexisting commitments. This guess is consistent with the results of a survey Open Philanthropy recently ran—we surveyed approximately 200 people who our advisors suggested had the potential to do good longtermist work, most of whom had recently made career changes that we thought were positive from a longtermist perspective. As part of this survey, we asked respondents several questions regarding the age at which they first encountered effective altruism or effective altruism-adjacent ideas.
On average, survey respondents reported first encountering EA/EA-adjacent ideas when they were 20 years of age.
About 25% of respondents first encountered EA/EA-adjacent ideas at ages 18 or below, even though few EA outreach projects focus on that age range.
On average, respondents said the best age for them to first encounter EA/EA-adjacent ideas would have been 16.
Survey respondents often mentioned that hearing about EA before starting university would have been particularly helpful because they could have planned how to use their time at university better, e.g. what to major in.
We also asked survey respondents to brainstorm open-endedly about how to get people similar to them interested in these ideas. 10% of responses mentioned starting outreach programs younger, particularly in high school. Several respondents mentioned that SPARC and ESPR had been helpful for them and that they would recommend these programs to similar people. (Certain other high school outreach projects have reported less success, but we don’t think these less-targeted programs provide much evidence about how promising targeted high school outreach is likely to be overall, as discussed here.)
Our survey also showed that EA groups, particularly university groups, have had a lot of impact on longtermist career trajectories. On a free-form question asking respondents to list the top few things that increased their expected impact, respondents listed EA groups more commonly than any other factor. On other measures of impact we used in our survey analysis, EA groups ranked between second and fourth among potential factors, above many EA organizations and popular pieces of writing in the EA-sphere. Most of this impact (65–75% on one measure) came from university groups. We think this suggests that, more generally, offering high-quality opportunities for university students to get involved is a promising kind of intervention.
Made-up examples of programs we think could be impactful
These examples are intended to be illustrative of the kinds of programs we’d be interested in funding. This is not intended to be a comprehensive list, nor a list of the programs we think would be most impactful.
We think these programs are unlikely to work fully as written. Founders generally have to dive deep into a project plan to figure out what’s tenable, altering their plan multiple times as they get a better understanding of the space, and we haven’t done that work. As such, we’d like these examples to serve as inspiration, not as instructions. We think programs of this kind are more likely to be successful when the founders develop their own vision and understanding of their target audience.
We would ultimately like to support dedicated teams or organizations that run programs for young people at scale. That said, we are likely to recommend that applicants with less of a track record start by trying out a small pilot of their program and iterating while maximizing program quality and target fit, rather than scaling immediately.
Example 1: A free two-week summer school in Oxford that teaches content related to longtermism to promising high school students. The program could have a similar structure to SPARC and ESPR, but with a more explicitly longtermist focus, and it could engage a broader range of gifted high school students.
We think programs like this are most effective when they focus on highly promising students, e.g. by filtering on Olympiad participation, high standardized test scores, competitive awards, or other markers of talent.
Oxford seems like a good location for programs like this because its status as an EA hub makes it easy for current longtermists doing good work to instruct and interact with students, which we think is important for programs like this to be successful. (Berkeley and Stanford seem like good locations for similar reasons.)
Oxford is a cool place to visit in and of itself, making a program located there attractive as a paid trip for high school students.
Example 2: A monthly AI safety workshop for computer science undergraduates, covering existing foundational work in AI safety.
There have been several programs like this, notably AIRCS, which our survey suggests has had an impact on some longtermist career trajectories. We think it’s likely that AIRCS hasn’t saturated the pool of top computer science undergraduates, and that there is room for more programs of this form that experiment with different kinds of content and instructors.
Example 3: A one-week summer program about effective altruism in Berkeley combined with a prestigious $20,000 merit-based scholarship for undergraduate students. The scholarship would involve an application process that required substantial engagement with ideas related to effective altruism, e.g. a relevant essay and an interview.
We think the best scholarship programs will be fairly selective, so as to attract very promising applicants and create a very strong cohort.
In-person programs that run right before students start their undergraduate degrees might be particularly impactful, via bolstering EA groups at top universities.
Scholarships and other programs that include substantial financial opportunities risk attracting applicants who are only interested in the money provided by the program. We think programs like this should construct application processes that make an effort to identify applicants genuinely interested in effective altruism, e.g. via essays and interviews.
Example 4: A monthly four-day workshop teaching foundational rationality content to promising young people.
The workshop could teach foundational technical topics in rationality, including some covered by CFAR in the past, e.g. probability theory, Bayesianism, Fermi estimation, calibration, betting, cognitive biases, etc., as well as exercises intended to help students use these thinking tools in the real world (a hypothetical sketch of such an exercise follows these bullets).
This could overlap heavily with SPARC’s content, but could engage a larger number of people per year than SPARC has capacity for, as well as a more varied or substantively different audience.
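To make the kind of content above concrete, here is a minimal, hypothetical sketch (in Python) of two exercises such a workshop might include: a Fermi estimate built from rough factors, and a Brier-score calibration check on a participant’s probability judgments. The numbers and function names are purely illustrative and are not drawn from any existing curriculum.

```python
def fermi_estimate(factors):
    """Multiply rough order-of-magnitude factors into a single estimate."""
    product = 1.0
    for value in factors.values():
        product *= value
    return product

def brier_score(forecasts):
    """Mean squared error between stated probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Classic Fermi exercise: roughly how many piano tuners work in Chicago?
factors = {
    "population": 3e6,                   # people in Chicago (rough)
    "pianos_per_person": 1 / 100,        # guess: one piano per 100 people
    "tunings_per_piano_per_year": 1,     # guess: each piano tuned once a year
    "tuner_years_per_tuning": 1 / 1000,  # guess: a tuner does ~1000 tunings a year
}
print(f"Estimated piano tuners: {fermi_estimate(factors):.0f}")

# Calibration exercise: (stated probability, actual outcome) pairs from a quiz
forecasts = [(0.9, 1), (0.7, 0), (0.6, 1), (0.2, 0), (0.5, 1)]
print(f"Brier score: {brier_score(forecasts):.3f}")
```

An exercise along these lines lets students see how multiplying rough guesses can land within an order of magnitude of the truth, and gives them a quantitative score for how well-calibrated their probability judgments are.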
Example 5: A fall jobs talk, with a follow-up discussion, held at top universities and describing career paths in defensive work against future biological catastrophes.
We think the fall of students’ final year is a good time to prompt undergraduates with concrete career suggestions, and with COVID-19 in recent memory, we think the next few years could be a particularly good time to talk to students about careers in global catastrophic biological risk reduction.
Projects aiming at widespread dissemination of relevant high-quality content
We are also seeking proposals for projects that aim to share high-quality, nuanced content related to improving the long-term future with large numbers of people. Projects could cover wide areas such as effective altruism, rationality, longtermism, or global catastrophic risk reduction, or they could have a more specific focus. We’re interested in supporting people both to create original content and to find new ways to share existing content.
Potential project types include:
Podcasts
YouTube channels
Massive open online courses (MOOCs)
New magazines, webzines, blogs, and media verticals
Books, including fiction
Strategic promotion of existing content (with the permission of the creators of the content, or their representatives), especially content that has historically drawn in promising individuals
Existing projects along these lines include the 80,000 Hours Podcast, Robert Miles’s AI alignment YouTube channel, and Vox’s Future Perfect.
We encourage projects that involve content in major world languages other than English, especially by native speakers of those languages—we think projects in other languages are especially likely to reach people who haven’t had as many opportunities to engage with these ideas.
We would like interested people to have a low bar for submitting a proposal, but we think projects that misrepresent relevant ideas or present them carelessly can do harm by alienating individuals who would otherwise have been sympathetic to them. We also think it’s important to be cognizant of potential political and social risks that come with content creation and dissemination projects in different countries. Nicole Ross at the Centre for Effective Altruism (email nicole@centreforeffectivealtruism.org) is available to provide advice on these kinds of risks.
Some reasons why we think this work has high expected value
Our sense from talking to people doing longtermist work that we think is promising is that, for many of them, particular pieces of writing or videos were central to their turn towards their current paths.
This seems broadly in line with the results of the survey mentioned above. The bodies of written work of Nick Bostrom, Eliezer Yudkowsky, and Peter Singer were among the top 10 sources of impact on longtermist career trajectories (a list that also included organizations, individual people, and other bodies of work) across several different measures. On one measure, Nick Bostrom’s work by itself had 68% of the impact of the most impactful organization and 75% of the impact of the second most impactful organization. When asked what outreach would attract similar people to longtermist work, 8% of respondents in the survey gave free-form responses implying that they think simply exposing similar people to EA/EA-adjacent ideas would be sufficient.
These data points suggest to us that even absent additional outreach programs, sharing these ideas more broadly could ultimately result in people turning towards career activities that are high-value from a longtermist perspective. For many who could work on idea dissemination, we think increasing the reach of existing works with a strong track record, like those given above, may be more impactful per unit of effort than creating new content.
Made-up examples of projects we think could be impactful
As above, these examples are intended to be illustrative of the kinds of programs we’d be interested in funding. This is not intended to be a comprehensive list, nor a list of the programs we think would be most impactful. We think these programs are unlikely to work fully as written and would like these projects to serve as inspiration, not as instructions.
Example 1: Collaborations with high-profile YouTube creators to create videos covering longtermist topics.
We think YouTube is an attractive promotional platform because different creators come with different audiences, making it easier to reach the kinds of people who most often become interested in longtermist ideas.
Example 2: Targeted social media advertising of episodes of the 80,000 Hours podcast. The project would aim to maximize downloads of the 80,000 Hours Podcast episodes that go through social media referrals.
The 80,000 Hours podcast seems promising to promote because we think it’s high-quality, quick to consume, and varied enough in content to appeal to a fairly wide audience.
The project could experiment with indiscriminately advertising podcast episodes to promising early-career individuals, e.g. STEM, economics, or philosophy students, or with advertising select podcast episodes on particular topics to audiences that they may appeal to.
Any project of this form should be done in collaboration with 80,000 Hours.
Example 3: A website that delivers, to people with a .edu email address who request them, free copies of physical books, e-books, or audiobooks that seem helpful for understanding how to do an outsized amount of good.
The bulk of the project work could be focused on website design and advertising, while book distribution could be handled through EA Books Direct, or done as part of this project.
Example 4: A MOOC covering existing AI safety work.
Example 5: A new magazine that covers potentially transformative technologies and ways in which they could radically transform civilization in positive or negative ways.
Application process
Primary application
If you think you might want to implement either of the kinds of outreach projects listed above, please submit a brief pre-proposal here. If we are interested in supporting your project, we will reach out to you and invite you to submit more information. We encourage submissions from people who are uncertain if they want to found a new project and just want funding to seriously explore an idea. If it would be useful for applicants developing their proposals, we are open to funding them to do full-time project development work for 3 months. We are happy to look at multiple pre-proposals from applicants who have several different project ideas.
We may also be able to help some applicants (e.g. by introducing them to potential collaborators, giving them feedback about plans and strategy, providing legal assistance, etc.) or be able to help find others who can. We are open to and encourage highly ambitious proposals for projects that would require annual budgets of millions of dollars, including proposals to scale existing projects that are still relatively small.
We intend to reply to all applications within two months. We have also been in touch with the Effective Altruism Infrastructure Fund and the Long-Term Future Fund, and they have expressed interest in funding proposals in the areas we describe above. If you want, you can choose to have them also receive your application via the same form we are using.
There is no deadline to apply; rather, we will leave this form open indefinitely until we decide that this program isn’t worth running, or that we’ve funded enough work in this space. If that happens, we will update this post at least a month ahead of time to note that we plan to close the form.
Collaborator application
If you aren’t interested in starting something yourself, but you would be interested in collaborating on or helping with the kinds of outreach projects listed above (either full or part-time), let us know here. We will connect you to project leads if we feel like there is a good fit for your skills and interests.
If you have any questions, please contact longtermfuture-outreach-rfp@openphilanthropy.org.
[1] Our work in this space is motivated by a desire to increase the pool of talent available for longtermist work. We think projects like the ones we describe may also be useful for effective altruism outreach aimed at other cause areas, but we (the team running this particular program, not Open Philanthropy as a whole) haven’t thought through how valuable this work looks from non-longtermist perspectives and don’t intend to make that a focus.
Minor suggestion: those forms should send a copy of your responses after you submit, or give the option “Would you like to receive a copy of your responses?”
Otherwise, it may be hard to confirm whether a submission went through, or the details of what you submitted.
Changed, thanks for the suggestion!
One more unsolicited outreach idea while I’m at it: high school career / guidance counselors in the US.
I’m not sure how idiosyncratic this was to my school, but we had a person whose job it was to give advice to older high school kids about what to do for college and career. My counselor’s advice was really bad, and I think a number of my friends would have glommed onto 80K-type material if it had been handed to them at that time (when people are telling you to figure out your life all of a sudden). This probably hits the 16-year-old demographic pretty well.
Could look like adding a bit of entry-point content geared at pre-college students to 80,000 Hours, then making some info packets explaining 80K to counselors as a nonprofit career planning resource, with handouts for students, and shipping them to every high school in the US or something similar (possibly this could also work internationally, I’m not sure).
This could lead to quite a bit of cost-effective positive impact on students, especially those who already have an interest in choosing a career that has positive social consequences. Many students, in my experience, would be very happy to consider higher-impact careers if they had a little wisely-presented encouragement at the right juncture. Such materials would not have to be extensive, and they could be tied to online content that goes deeper into the topic or even provides some interaction.
That said, the above OP call for proposals seems highly oriented towards students at elite schools, or elite students at other schools, and specifically is aimed at students heading for a university education. I might suggest that we should be considering how young people likely to enter other professions, be they white- or blue-collar, might benefit from an understanding of these topics (e.g., those listed above: EA, rationality, longtermism, and global catastrophic risk reduction).
I will start a discussion on this in the proper forum...but this much larger group of future consumers/workers/influencers/voters should not be ignored. Charities need staff at many levels, and people in many vocations can incorporate these ideas into their work, giving, volunteering, and political activities. Is it too soon for EA to open up to a broader audience?
If anyone stumbles upon this later, I imagine they may also be interested in Open Phil’s call for course development grant applications: https://www.openphilanthropy.org/focus/other-areas/open-philanthropy-course-development-grants
Thanks for this detailed post! This is interesting. I wanted to highlight this part for people who might not have read it, and I have a question about it:
Is OpenPhil willing to say how much they are willing to give in total per year for these kinds of outreach projects? I’m curious to know, and others might be too.
There’s no set maximum; we expect to be limited by the number of applications that seem sufficiently promising, not the cost.
Exciting!
This is probably not the best place to post this, but I’ve been learning recently about the success of hacking games in finding and training computer security people (https://youtu.be/6vj96QetfTg for a discussion; also this game I got excited about in high school: https://en.m.wikipedia.org/wiki/Cicada_3301).
I think there might be something to an EA/rationality game, like something with a save-the-world but realistic plot and game mechanics built around useful skills like Fermi estimation. This is a random gut feeling I’ve had for a while, not something well thought through, so it could be obviously wrong.
A couple of advantages over typical static content like videos or written intro sequences:
games can be “stickier”
people seem to enjoy intricate, complex games even while avoiding complex static media for lack of time; this is true of many high-school-aged people in my experience
games can tailor different angles into EA material depending on the user’s input
games can both educate and filter for/identify high-aptitude people, unlike written content or video
because games can collect info about user behavior, you might have a much richer sense of where people are dropping out, which you could prototype against or A/B test on (see the sketch after this list)
anecdotally, smart people I went to high school with seemed to have their career aspirations shaped by video games, primarily toward wanting to study computer science to become game developers. Maybe this could be channelled elsewhere?
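As a minimal sketch of the A/B-testing point above (the variant names, player counts, and completion rates below are entirely made up for illustration), one could compare how many players finish a tutorial level under two versions of the game using a simple two-proportion z-test:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z statistic for the difference between two completion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical data: players who finished the tutorial in each game variant
z = two_proportion_z(successes_a=180, n_a=400,   # variant A: 45% completion
                     successes_b=220, n_b=400)   # variant B: 55% completion
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 5% level
```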
A few downsides of games
limited to a particular demographic interested in video games
a lot of rationality/EA stuff seems maybe quite hard to gamify?
maybe a game makes EA stuff seem fantastical
maybe a game would degrade nuance/ epistemics of content
maybe games are quite expensive to make for what they are?
I have zero expertise or qualifications except occasionally playing games, but feel free to DM me anyway if you are interested in this :)
I thought Decision Problem: Paperclips introduced a subset of AI risk arguments fairly well in gamified form, but I’m not aware of anyone for whom the game sparked enough interest in AGI alignment/risk/safety to work on it. Does anybody else on this forum have data/anecdata?
I’m thrilled to hear about this!
I can’t find any deadline. How long should I expect this opportunity to stay open?
(I’m not applying myself but I’ll probably encourage some other people to do so.)
Sorry this was unclear! From the post:
I will bold this so it’s more clear.
Hi Linda,
It says here on the website page:
You’re right, I don’t immediately see it in the actual post, so it’s unclear.