Project ideas / active grantmaking ideas I collected
Context: What follows is a copy of a doc I made quickly in June/July 2021. Someone suggested I make it into a Forum post. But I think there are other better project idea lists, and more coming soon. And these ideas aren’t especially creative, ambitious, or valuable, and I don’t want people to think that they should set their sights as low as I accidentally did here. And this is now somewhat outdated in some ways. So I’m making it just a shortform rather than a top-level post, and I’m not sure whether you should bother reading it.
There are some interesting comment threads in the doc version.
I’m using this doc to collect “active grantmaking” ideas (i.e., things I’d maybe want EA funders to proactively find a way to fund, rather than waiting for an application). I’m approaching this in a brainstorming spirit; I expect that some of these ideas are bad, and that most of the good ones aren’t great and/or won’t happen anyway (because any given idea is hard to set up). I mostly have the EAIF and LTFF in mind, but these ideas could also be relevant to AWF or other EA funders.
EDITED TO ADD: In retrospect, I wish I’d been more creative & ambitious when making this, and maybe fleshed things out more.
I’ve put the ideas in descending order by how excited I currently feel about them. Feel free to skim or skip around.
I’d appreciate comments on the ideas, especially on:
How good or bad does the idea seem to you?
Do you know of someone who might be able to do the things in the first section if EA Funds gave the person money?
Do you know of something useful the orgs/people in the second section might be able to do with more money?
Any thoughts on the best way to approach these things, downside risks to consider, people who might have leads or give good advice?
Are there any ideas you’d particularly like / not like me to write about on the EA Forum? (By default, I might turn some subset of these ideas into one or more posts/shortforms later.)
(I also previously collected ideas here, and might integrate them into this doc or the active grantmaking ideas spreadsheet later.)
Projects it might be good for some org/person to do
Offering prizes for things we think should be done
I.e., saying what we want to be done, then paying people if and when they show us they’ve now done it, rather than paying people in advance for things they propose to do
I’m guessing this has been discussed before and there are good reasons this hasn’t been done, aside from just no one having thought about it much or committed to trying it?
Since I have that guess, I haven’t bothered thinking & writing more about why this might be good, might be bad, how to do it, etc. But I’ll probably do that if it turns out my guess was probably wrong.
See also Prize—EA Forum and Certificate of impact—EA Forum
Ideas from Intervention options for improving the EA-aligned research pipeline
See also Buck’s reactions
None of these ideas are “shovel-ready” (e.g., I didn’t list in that post people who could spearhead them), and some are fairly high-level/under-specified, but they could be starting points
Some of the ideas seem more promising than others; I put them in descending order of promisingness
Though I wasn’t specifically thinking from the perspective of a grantmaker, and if I was my order may have differed a little
Maybe I’ll later generate more specific ideas from that list and add them to this doc
Covering the costs for EA people/orgs to go through non-EA management training courses, get books on management, or similar
(Michelle’s post Training Bottlenecks in EA (professional skills) is relevant, but I haven’t read it in a few months and so am probably reinventing/ignoring some wheels here.)
Description:
Who could the recipients of the courses/books be?
EA people/orgs that are already doing management or may do so in future
Organisers of research training programs, like SERI, CHERI, CERI
They could then draw on the training when providing advice to the external mentors they pair program participants with
Currently it seems to me that fairly little guidance is given to the mentors, and the organisers are often ~uni students themselves (so they don’t necessarily have any management experience)
On the other hand, I’m not sure how well this “trickle-down training” approach would work, and management and mentorship are somewhat different anyway
What course(s) should be paid for?
I think The Management Center training would be fine
This is what RP used
It seemed good but flawed
Saulius and Linch give their thoughts here
I could share my notes if that’d be helpful
But I haven’t looked into options at all, and it’s very plausible something else would be better
Might be best to tell orgs they can choose whatever course they think is best and we’ll likely pay for it
What could covering the costs of relevant books look like?
Could pay for hard copies, ebooks, or audiobooks, and just give them to orgs, without asking them first
Could tell orgs we’d like to pay them to get this stuff, then let them apply for whatever form of it and whatever books they want
What books should be used?
There would be many reasonable book choices
See e.g. my list of Management-related books [shared]
Could also do this for other “work skills” or “org strategy” or whatever books, not just management books
E.g., Deep Work
Theory of change:
EA is to some extent constrained by management, mentorship, and organisational capacity
Non-EAs have already developed lots of useful resources on these topics
People (including EAs) often don’t access these resources by default
They might simply not think to do so
They may think to do so but then not do so due to inertia, whereas they would if the cost was reduced to 0 and someone had clearly signalled that this is worth doing or it was “already paid for”
They may think to do so but then see the cost as prohibitive
Courses do seem to me kind of “too expensive”, even though if I really think about it I realise that the time cost of attending is probably a substantially bigger deal than the dollar cost, such that if it’s worth the time it’s probably worth the money too
(Some courses aren’t worth the time, though, of course)
Possible downsides:
Opportunity cost of the time spent in the courses or reading the books
At least some parts of the advice provided by these courses, books, etc. are bad, so people would be learning at least some bad things
Part of what I have in mind is that the epistemics of the sort of people who produce these resources do typically seem to be worse than the epistemics of the EAs who would be recipients of this stuff (e.g., managers at orgs we think are worth supporting)
But if people spend at least a little time looking into which courses/books to go with, and seek recommendations from other EAs, it seems very hard to believe that people would be left with worse beliefs than they’d otherwise have
Maybe something like “Causing too much homogeneity in management practices”
But I’m not actually sure how the homogeneity itself would be bad
And this seems in any case avoidable by simply covering the costs of a range of courses, books, etc. and letting people pick for themselves
Who could give advice?
Michelle
She wrote a relevant post that I forgot about till I’d mostly written this idea...
People at RP
Other EAs who run orgs or do management
Probably a bunch of other people
Who could be the project lead?
People’s thoughts on this:
Misc:
An alternative idea would be to get some EAs to produce some resources like this
I think doing small versions of that alongside this main idea would be good
E.g., encouraging somewhat more EA org leaders and managers to write up their learnings and tips in docs/posts and giving workshops now and then
But I doubt that we should aim to have EAs actually produce courses and books
The capable EAs’ opportunity cost seems very high
This is something a lot of non-EAs work on and have pretty good incentives to do a good job of
Covering the costs for EA people/orgs to go through non-EA courses on things like work skills and running orgs, get books on those things, or similar
This is basically the same sort of idea as “Covering the costs for EA people/orgs to go through non-EA management training courses, get books on management, or similar”, just with different topics focused on
Also, since there’s overlap between the topics and between the delivery mechanisms, a single project could cover both things at once
This idea has basically the same description, theory of change, etc. as that one
I think Ozzie has written in some places about somewhat similar things regarding being a good member of a board of advisors or running an org well
Supporting “student journalism” that’s EA-relevant and/or is by EAs
Description:
Somehow using funding to make EA types trying out “student journalism” easier, more common, or more successful.
I’m not sure exactly what “student journalism” typically means, nor what I’d see as the best focus.
What I have in mind is definitely not necessarily just “news”; could also include things analogous to Guardian Long Reads, cultural essays, reviews, and listicles
So maybe it’s also “student-produced magazines”, or “student-produced written content for a wide audience”
I mostly have in mind university students, but it could make sense to do this for high school students too
I think I’d want the writing to be done by people who are at least somewhat engaged with EA
But it’s possible it could be good to have it done by non-EAs who seem like the sort of people who’d become EAs if they think and write about EA-related topics for a while
Could be getting students to contribute to existing publications or getting new publications to be created
Avital: “do you mean you want students to contribute to their schools’ existing student papers, or found new ones? Founding new ones can be rough because they don’t come with natural readerships, so it is potentially a lot of work for low payoff”
Me: “I think I’m open to either approach. I think maybe actually I wouldn’t mind almost no readers, since I think most of the benefit is as a pipeline for future proper journalists? I want readers mostly inasmuch as that keeps the writers motivated, allows them more feedback, etc. Readership is also more directly useful, but I think the direct impacts are less important to the pipeline stuff.”
I’m not sure what the best way to use money for this is
Could suggest that community builders try to encourage and support students in doing this, and provide funding for community builders who can do that
Could fund some students to pay for their time and other expenses when trying this out themselves
Could fund students to go through whatever training would be helpful
Could make a grant to an existing student newspaper/magazine so that they start a new “vertical” or section or whatever focused on relevant topics? But I don’t think they pay staff, so I don’t think this makes sense?
Could structure the process in a “contest” kind of way to promote the kind of writing you would like to see created
(Angela’s idea)
There are already some other “prizes” for EA type writing, like the Forethought Institute’s one, but I think they’re usually more research focused, and maybe a more public-facing-writing version could be cool too.
Owen: “Yeah I quite like that. Also if there’s one which is specifically aimed at student journalists, they might feel more like they have a shot and therefore pull to enter”
Theory of change:
Possible direct impacts:
EA movement building and more generally spreading useful ideas/info.
Could help the relevant uni (or high school) group attract people, since they now seem more active and interesting, people are getting exposed to sympathetic treatments of relevant ideas, or people are just more likely to hear about them
Could improve retention by “giving people something to do” (see also “Task Y”)
Possibly larger indirect impact: Serve as a pipeline for future Future Perfects, BBC Futures, etc.
Help more young EAs test fit for journalism
Help them build career capital
Then they could start new verticals or whatever in established media outlets, or start new outlets, or just try to cover important topics well and with good angles in regular journalism jobs
Possible downsides:
Fin: “I guess the (most obvious) risk is that this dilutes the quality of the overall EA journo-sphere in a potentially harmful way. Worst case is that silly or wrong things are said and associated with EA / longtermism and cause harm / put people off?”
Luca: “+1 on Fin’s point. I think a large part of what makes Future Perfect and OWID great is that they are really careful and accurate—not something I’d personally associate with student journalism”
Me: Agreed. I’d probably want it to not be explicitly EA/longtermism branded, and instead just cover the same sort of ideas. Like how Future Perfect is.
Also, what I have in mind includes things like student-produced magazines with pieces more like Guardian Long Reads or essays on culture, which I think have less of this downside than news-style student journalism
Target audience:
Who could give advice?
Sky
Nicole
Future Perfect people
BBC Futures people
Who could be the project lead?
Some community builder?
Someone who organises and advises community builders?
E.g., Emma Abele?
People’s thoughts on this:
Owen: “I think it could be a cool thing, but it’s not obvious to me how to use money to cause it to happen”
(Though this was before Angela’s prize suggestion, which Owen liked)
Giving EA researchers/orgs money to pay for external expert review of their work
Description:
Providing money to pay for the sort of external expert review OP already gets for a bunch of their own work
I think RP will be doing this in future
The experts could be academics but don’t necessarily have to be
It’s probably best if the experts are non-EAs
Reasons:
Usually the people with the most expertise on a topic (even if not those with the best judgement etc.) are non-EAs
Non-EAs’ opportunity cost from our perspective is usually lower
EAs often give review for free already
Non-EAs bring a more distinct perspective and body of knowledge to bear, increasing the marginal value of their input compared to just what the author and maybe other reviewers thought
But it could also make sense to make it easier to pay for EAs to review things in detail
Partly based on the general principle that it often makes sense to pay for services that are valuable
Partly because that could increase the chance that things are actually reviewed in detail, rather than there being lots of superficial reviews that felt to the reviewer like just supererogatory acts unrelated to their actual work
Theory of change:
Increase the quality of EA research outputs
Increase the quality of EA researchers via these paid reviews working like high-quality feedback to them on what they got right and wrong and how they could change their approach in future
Less important / more speculative:
Field-building via causing non-EA experts to be repeatedly exposed to important EA research outputs; they may then become interested in engaging with such topics/work more
Increasing the reputation/perceived quality of EA research outputs via the mere info that it was reviewed, in addition to any increase that occurs via actual increase in quality
This seems most relevant for non-EA audiences
This seems bad if the increase is to a higher level than the work warrants
This seems good if the increase is up to the appropriate level, whereas otherwise the reputation of the work would’ve been overly penalised for not having an impressive-sounding reviewer
E.g., maybe economists would pay too little attention to a report about TAI and the economy unless it says it was reviewed by an economist, even if the methodology and conclusions were already sound
Possible downsides:
Slow down research outputs
Especially in calendar time, due to waiting for the feedback
Also in number of hours required, due to reading and reacting to the feedback
Decrease the quality of research outputs
E.g. via pushing outputs too far away from “speculations” or “weirdness” that were actually sound
E.g., via making people put less effort into seeking or providing reviews from EAs than they otherwise would’ve
Increase the reputation of some work to too high a level
Open questions:
How much would this cost per output?
How much would these reviews improve output?
At what stage in the research process should such reviews occur?
Should EA Funds just provide unrestricted funding that can be used for this, provide funding restricted to this but with no more specific restrictions, provide funding for this for specific pieces of work or reviewers, or provide funding for specific reviewers to do this for whatever orgs ask them to do it?
The last idea seems bad
The rest seem reasonable
How many orgs have work that’s important enough to warrant this but aren’t already paying for it?
Who could give advice?
Open Phil
RP
People in academia?
Probably a bunch of other people?
Who could be the project lead?
N/A
People’s thoughts on this:
Misc:
I think the way to make this happen would be one or both of:
Publicly communicate that EA Funds is in general open to paying for such things
Actively encourage specific orgs/people to apply for funding for such things
I don’t think “making this happen” needs to be a project with a project lead
Forecasting tournaments amplifying evaluation research
“Pay Metaculus and Givewell to run a forecasting competition where Metaculus forecast GiveWell evaluations. Forecasters would guess the final value for the cost per life saved number that GiveWell would reach if they were to evaluate.
Slightly shakier, but GiveWell would then evaluate any which are more effective than GiveDirectly.”
(That was the basis of me adding this idea to this doc.)
This could be done for other evaluators too (e.g., ACE, HLI, maybe Nuno/QURI)
This would probably require that GiveWell commit to evaluating a random subset of the interventions/orgs included (in addition to the ones that are forecasted to be promising)
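The selection rule above (evaluate everything forecast to beat the GiveDirectly benchmark, plus a random subset for calibration) could be sketched as follows. This is purely illustrative: every charity name, dollar figure, and the threshold are made-up assumptions, not anything GiveWell or Metaculus has actually proposed.

```python
import random

def select_for_evaluation(forecasts, threshold, n_random, seed=0):
    """Pick which charities GiveWell would evaluate under the proposed scheme.

    forecasts: dict of charity name -> forecast cost per life saved (USD)
    threshold: benchmark cost per life saved (e.g. a GiveDirectly-equivalent);
        anything forecast to beat it gets evaluated
    n_random: size of a random "audit" subset drawn from the rest, so forecast
        accuracy can be scored without selection bias
    """
    promising = [c for c, cost in forecasts.items() if cost <= threshold]
    rest = [c for c in forecasts if c not in promising]
    rng = random.Random(seed)  # fixed seed here only for reproducibility
    audit = rng.sample(rest, min(n_random, len(rest)))
    return promising, audit

# Hypothetical forecasts, in cost per life saved:
forecasts = {"Charity A": 3000, "Charity B": 9000,
             "Charity C": 4500, "Charity D": 12000}
promising, audit = select_for_evaluation(forecasts, threshold=5000, n_random=1)
# Charities A and C beat the (made-up) $5,000 threshold; one of B/D is also
# evaluated at random.
```

The random audit subset is what makes the tournament scoreable: without it, forecasts that wrongly rate a charity as unpromising would never be checked against an actual GiveWell evaluation.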
Theory of change:
The idea-suggester wrote:
“1) Cheap search. It would cheaply test if there is a way to cheaply recommend good candidates for GiveWell evaluation.
2) Wide search. It might find charities which make their way onto Givewell’s top charities which would not have otherwise been seen.
3) Forecasting and evaluation. We would better understand if forecasting can predict charity evaluation. This might open up cheaper or wider evaluation opportunities in future.”
Subsidise creators of EA-aligned podcasts, videos, etc. to outsource some tasks (e.g., editing)
Description:
Types of tasks that might be outsourceable:
Editing
Transcript-making
Animations
Producing?
Marketing?
Description-writing?
Who could be outsourced to:
EAs who have more of a comparative advantage for these tasks than the creators of the EA-aligned content do
This could be due to these people being more junior, less skilled at other activities, or more skilled at these up-for-outsourcing activities
Non-EAs
Creators this might be relevant to:
(It could be worth looking at lists of EA-related podcasts and EA-related video sources to think about which ones should have some tasks outsourced but probably don’t already. What I’ve listed is just off the top of my head.)
Hear This Idea
Rational Animation?
Happier World?
Theory of change:
Free the creators up to create more
Free the creators up to do more stuff on the side
E.g., it would suck if Spencer Greenberg did his own podcast editing, even if that didn’t reduce how rapidly he produced podcast eps
E.g., Fin and Luca of Hear This Idea do their own editing, which presumably leaves them less time for the other RSP-related things they do (e.g., assisting Toby Ord, building AI policy career capital)
Lead to higher quality content
One could outsource to specialists
Possible downsides:
Open questions:
How many things can be outsourced easily? How much time do they take up by default?
How many creators are creating useful stuff or are on track to do so, aren’t yet outsourcing some tasks, but would do so if given more money?
Who could give advice?
Creators
Who could be the project lead?
N/A
People’s thoughts on this:
Misc:
I think the way to make this happen would be one or both of:
Publicly communicate that EA Funds is in general open to paying for such things
Actively encourage specific creators to apply for funding for such things
I don’t think “making this happen” needs to be a project with a project lead
More expert elicitation, surveys, double cruxes, etc. on important topics
Description:
This is a pretty vague/broad idea
It was inspired by me liking Carlier et al.’s AI risk survey and thinking I might be keen to see more such things
The things linked to/mentioned above provide some thoughts on why this could be useful
Possible downsides:
Some of these ideas require using the time of people with high opportunity cost (e.g., AI alignment researchers filling in surveys)
Could lead to more anchoring / over-deference
But I think actually this’d mostly push in the opposite direction by making it more obvious how much disagreement and uncertainty there is
And when there really is a wide degree of agreement, e.g. on AI risk vs asteroid risk, this does seem like something I’d like more people to know about and defer to
And some of these ideas involve getting at underlying rationales, cruxes, etc., not just bottom-line beliefs
Open questions:
How best to use money to create this?
Prizes?
Unrestricted funding to people who’ve done useful work like this in the past?
Request for proposals that are along these lines?
Who could give advice?
Me
Other people who did things like the above-mentioned projects
Linch’s forecasting ideas doc contains a somewhat similar idea, so I’m deprioritising thinking more about this myself for now, but I might return to it later
“Intro to EA Research Hackathon”
Peter’s idea
Original Slack message:
“Random idea I haven’t thought out but seems like something you two [Michael and Linch] would both like—hosting an “Intro to EA Research Hackathon” (or “Intro to EA Research Festival” or another name) perhaps over four Saturdays or something, where feedback is given between each day, with the goal of making an EA Forum post. e.g.,
Day 1: Make a research agenda
Day 2: Refine your research agenda based on feedback
Day 3: Make some progress on your research
Day 4: Make a post on the EA Forum
We’d pair each person with a mentor and there would be a 1-2 week gap between the days to allow time for feedback to be given. People could still work on the project outside of the Hackathon days.
Perhaps we could select people through a mix of (a) inviting our top intern applicants that don’t make it to the internship, (b) inviting some people who narrowly didn’t make it to Stage 2 to do Stage 2, and (c) using our Stage 1 and 2 applications… we could also have a lottery component or something.
This would help new researchers practice making progress on important research and actually build them a precious credential to use for future research hiring. We’d also get great feedback on the quality of researchers that we could use for future hiring.
The idea is to open up something lower cost and higher volume to add even more than an internship, since even the internship is too competitive.”
Subsidise/cover useful apps, software subscriptions, or similar
Description:
What are some things we might want to subsidise/cover?
Roam
Asana
Audible
SavvyCal/Calendly
Guesstimate
Paid Slack accounts?
Paid Airtable accounts?
For whom might we subsidise/cover such things?
“EAs in general”? How to define?
Attendees of EA events?
Members of core EA orgs?
Participants in EA-aligned research training programs?
Some other group?
How would we do this?
Pay the company and get a promo code
How to distribute the promo code such that it’s not overly exclusive but also doesn’t end up e.g. on Reddit and then being used by lots of non-EAs?
Pay EAs and trust they’ll use the money this way
Pay orgs, research training programs, etc. to get group plans for their members
(I know Remmelt did this with Asana, so could find out what he did)
How much of a subsidy might we want to provide?
Partial or full?
For how long?
Theory of change:
Boost people’s productivity
Make them more effective, intelligent, etc.
E.g., Roam and Audible might do this
Save people time they’d otherwise spend finding deals etc.
Why would those impacts occur?
It seems to me, and I’ve often heard it remarked, that people are weirdly averse to paying for app-like things or software subscriptions, relative to their willingness to pay for other things and to the amount of value these things provide
Seems like this is partly just that people are used to the idea of these things being free or super cheap
I think this leads to some/many EAs not using these things even though they’d be useful, using inferior alternatives/versions, or spending time trying to find ways to not pay or pay less
E.g., until earlier this year, I was regularly spending a little time stopping and starting a few Audible subscriptions to save something like $15/month
(That said, I wasn’t spending much time)
E.g., an RP intern spent a while thinking they should use Roam but not using it because they were trying to figure out if they could get a subsidised version
Possible downsides:
Open questions:
Who could give advice?
Remmelt?
Ozzie?
The Roam guy?
Who could be the project lead?
People’s thoughts on this:
Misc:
Template
Description:
Theory of change:
Possible downsides:
Open questions:
Who could give advice?
Who could be the project lead?
People’s thoughts on this:
Misc:
Orgs/people that might be able to turn money into impact somehow
This section focuses on ideas where I started with a thought like “This org/person has done useful stuff in the past / seems on track to do so in future. Maybe if we give them more money they’ll do more useful stuff?”
I’ve come up with some specific ideas for what I might want to suggest some of the orgs/people do, but really I might want interactions with most of them to start with asking for their thoughts on whether and how they could use more money to create more impact
In some cases, the best move is probably simply to contact the people to tell them about EAIF/LTFF and suggest they apply, or to post such a message in a relevant Slack workspace or Facebook group or whatever
In some cases, the best move might actually be to try to find some other person/org to try to replicate something like what this person/org did, or a variant of that
E.g., finding someone else who can start another thing like Our World in Data, but with a different focus
Our World in Data
What sort of things might I want them to use money for?
Just expand/scale in general?
Do something analogous to how Vox made a new “vertical” for Future Perfect?
Like a new department or focus area
Sketch of what this could look like:
One of the buttons on the bar at the top of the OWID site says the name of some broad topic area relevant to EA, or something vaguer like Future Perfect
The stuff in that area is more EA-relevant than average, and has a similar theme or angle or something; e.g., maybe it’s all focused on things relevant to x-risks
At least one OWID staff member is primarily focused on producing that sort of content.
It’s still the same sort of content as OWID’s regular stuff.
E.g., they don’t have a finished page on nuclear weapons, and I don’t think they have ones on bioweapons or AI. I want them to have that.
We could either ask them to make those things specifically, or ask them to set up something like how Future Perfect works within Vox that will regularly produce that sort of thing.
Luca: “I talked with Edouard Mathieu (from OWID) about this, and know he’s thinking about what data is relevant for longtermism (my impression from him is that it seems quite hard for certain EA topics since lots of the worries are unprecedented and thus don’t really have data on them)”
Who else could give advice on this?
People’s thoughts on this:
Misc:
EA research training program participants
E.g., SERI fellows, RP interns, CHERI fellows, LPP fellows
I’ll post an encouragement in the RP Slack in July suggesting the RP interns consider applying for funding
Template
What sort of things might I want them to use money for?
Why do I think they might be able to do useful things with money?
Theory of change:
Possible downsides:
Open questions:
Who at this org might be good to contact about this?
(Michelle’s post Training Bottlenecks in EA (professional skills) is relevant, but I haven’t read it in a few months and so am probably reinventing/ignoring some wheels here.)
Description:
Who could the recipients of the courses/books be?
EA people/orgs that are already doing management or may do so in future
Organisers of research training programs, like SERI, CHERI, CERI
They could then draw on the training when providing advice to the external mentors they pair program participants with
Currently it seems to me that fairly little guidance is given to the mentors, and the organisers are often ~uni students themselves (so they don’t necessarily have any management experience)
On the other hand, I’m not sure how well this “trickle-down training” approach would work, and management and mentorship are somewhat different anyway
What course(s) should be paid for?
I think The Management Center training would be fine
This is what RP used
It seemed good but flawed
Saulius and Linch give their thoughts here
I could share my notes if that’d be helpful
But I haven’t looked into options at all, and it’s very plausible something else would be better
Might be best to tell orgs they can choose whatever course they think is best and we’ll likely pay for it
What could covering the costs of relevant books look like?
Could pay for hard copies, ebooks, or audiobooks, and just give them to orgs, without asking them first
Could tell orgs we’d like to pay them to get this stuff, then let them apply for whatever form of it and whatever books they want
What books should be used?
There would be many reasonable book choices
See e.g. my list of Management-related books [shared]
Could also do this for other “work skills” or “org strategy” or whatever books, not just management books
E.g., Deep Work
Theory of change:
EA is to some extent constrained by management, mentorship, and organisational capacity
Non-EAs have already developed lots of useful resources on these topics
People (including EAs) often don’t access these resources by default
They might simply not think to do so
They may think to do so but then not do so due to inertia, whereas they would if the cost was reduced to 0 and someone had clearly signalled that this is worth doing or it was “already paid for”
They may think to do so but then see the cost as prohibitive
Courses do seem to me kind of “too expensive”, even though if I really think about it I realise that the time cost of attending is probably a substantially bigger deal than the dollar cost, such that if it’s worth the time it’s probably worth the money too
(Some courses aren’t worth the time, though, of course)
Possible downsides:
Opportunity cost of the time spent in the courses or reading the books
At least some parts of the advice provided by these courses, books, etc. are bad, so people would be learning at least some bad things
Part of what I have in mind is that the epistemics of the sort of people who produce these resources seem typically worse than the epistemics of the EAs who would be recipients of this stuff (e.g., managers at orgs we think are worth supporting)
But if people spend at least a little time looking into which courses/books to go with, and seek recommendations from other EAs, it seems very hard to believe that people would be left with worse beliefs than they’d otherwise get
Maybe something like “Causing too much homogeneity in management practices”
But I’m not actually sure how the homogeneity itself would be bad
And this seems in any case avoidable by simply covering the costs of a range of courses, books, etc. and letting people pick for themselves
Who could give advice?
Michelle
She wrote a relevant post that I forgot about till I’d mostly written this idea...
People at RP
Other EAs who run orgs or do management
Probably a bunch of other people
Who could be the project lead?
People’s thoughts on this:
Misc:
An alternative idea would be to get some EAs to produce some resources like this
I think doing small versions of that alongside this main idea would be good
E.g., encouraging somewhat more EA org leaders and managers to write up their learnings and tips in docs/posts and giving workshops now and then
But I doubt that we should aim to have EAs actually produce courses and books
The capable EAs’ opportunity cost seems very high
This is something a lot of non-EAs work on and have pretty good incentives to do a good job of
Covering the costs for EA people/orgs to go through non-EA courses on things like work skills and running orgs, get books on those things, or similar
This is basically the same sort of idea as “Covering the costs for EA people/orgs to go through non-EA management training courses, get books on management, or similar”, just with different topics focused on
Also, since there’s overlap between the topics and between the delivery mechanisms, a single project could cover both things at once
This idea has basically the same description, theory of change, etc. as that one
I think Ozzie has written in some places about somewhat similar things regarding being a good member of a board of advisors or running an org well
Supporting “student journalism” that’s EA-relevant and/or is by EAs
Description:
Somehow using funding to make EA types trying out “student journalism” easier, more common, or more successful.
I’m not sure exactly what “student journalism” typically means, nor what I’d see as the best focus.
What I have in mind is definitely not necessarily just “news”; could also include things analogous to Guardian Long Reads, cultural essays, reviews, and listicles
So maybe it’s also “student-produced magazines”, or “student-produced written content for a wide audience”
I mostly have in mind university students, but it could make sense to do this for high school students too
I think I’d want the writing to be done by people who are at least somewhat engaged with EA
But it’s possible it could be good to have it done by non-EAs who seem like the sort of people who’d become EAs if they think and write about EA-related topics for a while
Could be getting students to contribute to existing publications or getting new publications to be created
Avital: “do you mean you want students to contribute to their schools’ existing student papers, or found new ones? Founding new ones can be rough because they don’t come with natural readerships, so it is potentially a lot of work for low payoff”
Me: “I think I’m open to either approach. I think maybe actually I wouldn’t mind almost no readers, since I think most of the benefit is as a pipeline for future proper journalists? I want readers mostly inasmuch as that keeps the writers motivated, allows them more feedback, etc. Readership is also more directly useful, but I think the direct impacts are less important to the pipeline stuff.”
I’m not sure what the best way to use money for this is
Could suggest that community builders try to encourage and support students in doing this, and provide funding for community builders who can do that
Could fund some students to pay for their time and other expenses when trying this out themselves
Could fund students to go through whatever training would be helpful
Could grant to an existing student newspaper/magazine so that they start a new “vertical” or section or whatever focused on relevant topics? But I don’t think they pay staff, so I don’t think this makes sense?
Could structure the process in a “contest” kind of way to promote the kind of writing you would like to see created
(Angela’s idea)
There are already some other “prizes” for EA type writing, like the Forethought Institute’s one, but I think they’re usually more research focused, and maybe a more public-facing-writing version could be cool too.
Owen: “Yeah I quite like that. Also if there’s one which is specifically aimed at student journalists, they might feel more like they have a shot and therefore pull to enter”
Theory of change:
Possible direct impacts:
EA movement building and more generally spreading useful ideas/info.
Could help the relevant uni (or high school) group attract people, since they now seem more active and interesting, people are getting exposed to sympathetic treatments of relevant ideas, or people are just more likely to hear about them
Could improve retention by “giving people something to do” (see also “Task Y”)
Possibly larger indirect impact: Serve as a pipeline for future Future Perfects, BBC Futures, etc.
Help more young EAs test fit for journalism
Help them build career capital
Then they could start new verticals or whatever in established media outlets, or start new outlets, or just try to cover important topics well and with good angles in regular journalism jobs
Possible downsides:
Fin: “I guess the (most obvious) risk is that this dilutes the quality of the overall EA journo-sphere in a potentially harmful way. Worst case is that silly or wrong things are said and associated with EA / longtermism and cause harm / put people off?”
Luca: “+1 on Fin’s point. I think a large part of what makes Future Perfect and OWID great is that they are really careful and accurate—not something I’d personally associate with student journalism”
Me: Agreed. I’d probably want it to not be explicitly EA/longtermism branded, and instead just cover the same sort of ideas. Like how Future Perfect is.
Also what I have in mind also includes things like student-produced magazines that have things more like Guardian Long Reads or essays on culture stuff, which I think have less of this downside than more news-style student journalism
Target audience:
Who could give advice?
Sky
Nicole
Future Perfect people
BBC Futures people
Who could be the project lead?
Some community builder?
Someone who organises and advises community builders?
E.g., Emma Abele?
People’s thoughts on this:
Owen: “I think it could be a cool thing, but it’s not obvious to me how to use money to cause it to happen”
(Though this was before Angela’s prize suggestion, which Owen liked)
Giving EA researchers/orgs money to pay for external expert review of their work
Description:
Providing money to pay for the sort of external expert review OP already gets for a bunch of their own work
I think RP will be doing this in future
The experts could be academics but don’t necessarily have to be
It’s probably best if the experts are non-EAs
Reasons:
Usually the people with the most expertise on a topic (even if not those with the best judgement etc.) are non-EAs
Non-EAs’ opportunity cost from our perspective is usually lower
EAs often give review for free already
Non-EAs bring a more distinct perspective and body of knowledge to bear, increasing the marginal value of their input compared to just what the author and maybe other reviewers thought
But it could also make sense to make it easier to pay for EAs to review things in detail
Partly based on the general principle that it often makes sense to pay for services that are valuable
Partly because that could increase the chance that things are actually reviewed in detail, rather than there being lots of superficial reviews that felt to the reviewer like just supererogatory acts unrelated to their actual work
Theory of change:
Increase the quality of EA research outputs
Increase the quality of EA researchers via these paid reviews working like high-quality feedback to them on what they got right and wrong and how they could change their approach in future
Less important / more speculative:
Field-building via causing non-EA experts to be repeatedly exposed to important EA research outputs; they may then become interested in engaging with such topics/work more
Increasing the reputation/perceived quality of EA research outputs via the mere info that it was reviewed, in addition to any increase that occurs via actual increase in quality
This seems most relevant for non-EA audiences
This seems bad if the increase is to a higher level than the work warrants
This seems good if the increase is up to the appropriate level, whereas otherwise the reputation of the work would’ve been overly penalised for not having an impressive-sounding reviewer
E.g., maybe economists would pay too little attention to a report about TAI and the economy unless it says it was reviewed by an economist, even if the methodology and conclusions were already sound
Possible downsides:
Slow down research outputs
Especially in calendar time, due to waiting for the feedback
Also in number of hours required, due to reading and reacting to the feedback
Decrease the quality of research outputs
E.g. via pushing outputs too far away from “speculations” or “weirdness” that were actually sound
E.g., via making people put less effort into seeking or providing reviews from EAs than they otherwise would’ve
Increase the reputation of some work to too high a level
Open questions:
How much would this cost per output?
How much would these reviews improve output?
At what stage in the research process should such reviews occur?
Should EA Funds just provide unrestricted funding that can be used for this, provide funding restricted to this but with no more specific restrictions, provide funding for this for specific pieces of work or reviewers, or provide funding for specific reviewers to do this for whatever orgs ask them to do it?
The last idea seems bad
The rest seem reasonable
How many orgs have work that’s important enough to warrant this but aren’t already paying for it?
Who could give advice?
Open Phil
RP
People in academia?
Probably a bunch of other people?
Who could be the project lead?
N/A
People’s thoughts on this:
Misc:
I think the way to make this happen would be one or both of:
Publicly communicate that EA Funds is in general open to paying for such things
Actively encourage specific orgs/people to apply for funding for such things
I don’t think “making this happen” needs to be a project with a project lead
Red teaming papers as an EA training exercise
See Linch’s shortform and my comments there
Buck’s book review idea
See Buck’s shortform and the discussion there
New things kind-of like Our World in Data
See also the section on Our World in Data below.
Description:
Basically suggested by TJ in this thread:
Theory of change:
Possible downsides:
Open questions:
Is this better than just funding Our World in Data (in general or for specific activities)?
Who could give advice?
Who could be the project lead?
People’s thoughts on this:
Misc:
Forecasting tournaments amplifying evaluation research
Description:
Someone wrote the following on the Submit grant suggestions to EA Funds form:
“Pay Metaculus and Givewell to run a forecasting competition where Metaculus forecast GiveWell evaluations. Forecasters would guess the final value for the cost per life saved number that GiveWell would reach if they were to evaluate.
Slightly shakier, but GiveWell would then evaluate any which are more effective than GiveDirectly.”
(That was the basis of me adding this idea to this doc.)
This could be done for other evaluators too (e.g., ACE, HLI, maybe Nuno/QURI)
This would probably require that GiveWell commit to evaluating a random subset of the interventions/orgs included (in addition to the ones that are forecasted to be promising)
Theory of change:
The idea-suggester wrote:
“1) Cheap search. It would cheaply test if there is a way to cheaply recommend good candidates for GiveWell evaluation.
2) Wide search. It might find charities which make their way onto Givewell’s top charities which would not have otherwise been seen.
3) Forecasting and evaluation. We would better understand if forecasting can predict charity evaluation. This might open up cheaper or wider evaluation opportunities in future.”
I think this would be an example of the more general idea of amplifying generalist research via forecasting
Possible downsides:
Open questions:
Who could give advice?
Metaculus
Ozzie
Linch
Other forecasting people
Who could be the project lead?
Metaculus
QURI?
Other forecasting people?
People’s thoughts on this:
Misc:
See also Replacing part of our research process with Metaculus tournaments and Ways we could integrate forecasting/Metaculus into our research process and outputs
Subsidise creators of EA-aligned podcasts, videos, etc. to outsource some tasks (e.g., editing)
Description:
Types of tasks that might be outsourceable:
Editing
Transcript-making
Animations
Producing?
Marketing?
Description-writing?
Who could be outsourced to:
EAs who have more of a comparative advantage for these tasks than the creators of the EA-aligned content do
This could be due to these people being more junior, less skilled at other activities, or more skilled at these up-for-outsourcing activities
Non-EAs
Creators this might be relevant to:
(It could be worth looking at lists of EA-related podcasts and EA-related video sources to think about which ones should have some tasks outsourced but probably don’t already. What I’ve listed is just off the top of my head.)
Hear This Idea
Rational Animation?
Happier World?
Theory of change:
Free the creators up to create more
Free the creators up to do more stuff on the side
E.g., it would suck if Spencer Greenberg did his own podcast editing, even if that didn’t reduce how rapidly he produced podcast eps
E.g., Fin and Luca of Hear This Idea do their own editing, which presumably leaves them less time for the other RSP-related things they do (e.g., assisting Toby Ord, building AI policy career capital)
Lead to higher quality content
One could outsource to specialists
Possible downsides:
Open questions:
How many things can be outsourced easily? How much time do they take up by default?
How many creators are creating useful stuff or are on track to do so, aren’t yet outsourcing some tasks, but would do so if given more money?
Who could give advice?
Creators
Who could be the project lead?
N/A
People’s thoughts on this:
Misc:
I think the way to make this happen would be one or both of:
Publicly communicate that EA Funds is in general open to paying for such things
Actively encourage specific creators to apply for funding for such things
I don’t think “making this happen” needs to be a project with a project lead
More expert elicitation, surveys, double cruxes, etc. on important topics
Description:
This is a pretty vague/broad idea
It was inspired by me liking Carlier et al.’s AI risk survey and thinking I might be keen to see more such things
Could be diving deeper on some AI stuff
Could be other x-risks
Could be other topics
Other examples of the kind of thing I mean:
Database of existential risk estimates (or similar)
Crucial questions for longtermists—EA Forum
Clarifying some key hypotheses in AI alignment
The in-progress AI project with many authors that’s kind-of related to the above post
Some work by Garfinkel
Some work by Ngo
Conversation on forecasting with Vaniver and Ozzie Gooen—EA Forum
Theory of change:
The things linked to/mentioned above provide some thoughts on why this could be useful
Possible downsides:
Some of these ideas require using the time of people with high opportunity cost (e.g., AI alignment researchers filling in surveys)
Could lead to more anchoring / over-deference
But I think actually this’d mostly push in the opposite direction by making it more obvious how much disagreement and uncertainty there is
And when there really is a wide degree of agreement, e.g. on AI risk vs asteroid risk, this does seem like something I’d like more people to know about and defer to
And some of these ideas involve getting at underlying rationales, cruxes, etc., not just bottom-line beliefs
Open questions:
How best to use money to create this?
Prizes?
Unrestricted funding to people who’ve done useful work like this in the past?
Request for proposals that are along these lines?
Who could give advice?
Me
Other people who did things like the above-mentioned projects
Who could be the project lead?
People’s thoughts on this:
Misc:
Ideas related to IGM-style expert panels
I have a separate, short doc on this from ~March: Ideas related to IGM-style expert panels
Linch’s forecasting ideas doc contains a somewhat similar idea, so I’m deprioritising thinking more about this myself for now, but I might return to it later
“Intro to EA Research Hackathon”
Peter’s idea
Original Slack message:
“Random idea I haven’t thought out but seems like something you two [Michael and Linch] would both like—hosting an “Intro to EA Research Hackathon” (or “Intro to EA Research Festival” or another name) perhaps over four Saturdays or something, where feedback is given between each day, with the goal of making an EA Forum post. e.g.,
Day 1: Make a research agenda
Day 2: Refine your research agenda based on feedback
Day 3: Make some progress on your research
Day 4: Make a post on the EA Forum
We’d pair each person with a mentor and there would be a 1-2 week gap between the days to allow time for feedback to be given. People could still work on the project outside of the Hackathon days.
Perhaps we could select people through a mix of (a) inviting our top intern applicants that don’t make it to the internship, (b) inviting some people who narrowly didn’t make it to Stage 2 to do Stage 2, and (c) using our Stage 1 and 2 applications… we could also have a lottery component or something.
This would help new researchers practice making progress on important research and actually build them a precious credential to use for future research hiring. We’d also get great feedback on the quality of researchers that we could use for future hiring.
The idea is to open up something lower cost and higher volume to add even more than an internship, since even the internship is too competitive.”
I see pros and cons
Discussed a bit in this thread: https://rethinkpriorities.slack.com/archives/G01EEQ179LP/p1619038497019100
Subsidise/cover useful apps, software subscriptions, or similar
Description:
What are some things we might want to subsidise/cover?
Roam
Asana
Audible
SavvyCal/Calendly
Guesstimate
Paid Slack accounts?
Paid Airtable accounts?
For whom might we subsidise/cover such things?
“EAs in general”? How to define?
Attendees of EA events?
Members of core EA orgs?
Participants in EA-aligned research training programs?
Some other group?
How would we do this?
Pay the company and get a promo code
How to distribute the promo code such that it’s not overly exclusive but also doesn’t end up e.g. on Reddit and then being used by lots of non-EAs?
Pay EAs and trust they’ll use the money this way
Pay orgs, research training programs, etc. to get group plans for their members
(I know Remmelt did this with Asana, so could find out what he did)
How much of a subsidy might we want to provide?
Partial or full?
For how long?
Theory of change:
Boost people’s productivity
Make them more effective, intelligent, etc.
E.g., Roam and Audible might do this
Save people time they’d otherwise spend finding deals etc.
Why would those impacts occur?
It seems to me, and I’ve often heard it remarked, that people are weirdly averse to paying for app-like things or software subscriptions, relative to their willingness to pay for other things and to the amount of value these things provide
Seems like this is partly just that people are used to the idea of these things being free or super cheap
I think this leads to some/many EAs not using these things even though they’d be useful, using inferior alternatives/versions, or spending time trying to find ways to not pay or pay less
E.g., until earlier this year, I was regularly spending a little time stopping and starting a few audible subscriptions to save something like $15/month
(That said, I wasn’t spending much time)
E.g., an RP intern spent a while thinking they should use Roam but not using it because they were trying to figure out if they could get a subsidised version
Possible downsides:
Open questions:
Who could give advice?
Remmelt?
Ozzie?
The Roam guy?
Who could be the project lead?
People’s thoughts on this:
Misc:
Template
Description:
Theory of change:
Possible downsides:
Open questions:
Who could give advice?
Who could be the project lead?
People’s thoughts on this:
Misc:
Orgs/people that might be able to turn money into impact somehow
This section focuses on ideas where I started with a thought like “This org/person has done useful stuff in the past / seems on track to do so in future. Maybe if we give them more money they’ll do more useful stuff?”
I’ve come up with some specific ideas for what I might want to suggest some of the orgs/people do, but really I might want interactions with most of them to start with asking for their thoughts on whether and how they could use more money to create more impact
In some cases, the best move is probably simply to contact the people to tell them about EAIF/LTFF and suggest they apply, or to post such a message in a relevant Slack workspace or Facebook group or whatever
In some cases, the best move might actually be to try to find some other person/org to try to replicate something like what this person/org did, or a variant of that
E.g., finding someone else who can start another thing like Our World in Data, but with a different focus
Our World in Data
See also New things kind-of like Our World in Data.
What sort of things might I want them to use money for?
Just expand/scale in general?
Do something analogous to how Vox made a new “vertical” for Future Perfect?
Like a new department or focus area
Sketch of what this could look like:
A button on the bar at the top of the OWID site that says the name of some broad topic area relevant to EA, or something vaguer like Future Perfect
The stuff in that area is more EA-relevant than average, and has a similar theme or angle or something. e.g., maybe it’s all focused on things relevant to x-risks
At least one OWID staff member is primarily focused on producing that sort of content.
It’s still the same sort of content as OWID’s regular stuff.
E.g., they don’t have a finished page on nuclear weapons, and I don’t think they have ones on bioweapons or AI. I want them to have that.
We could either ask them to make those things specifically, or ask them to set up something like how Future Perfect works within Vox that will regularly produce that sort of thing.
Do work on specific topics?
Examples:
AI
Nuclear weapons
They only have “a preliminary collection of materials” on nuclear weapons
Why do I think they might be able to do useful things with money?
Possible downsides:
Open questions:
What sorts of restricted funding, advice, or encouragement would they be open to?
On the 80k podcast, Roser indicated they much preferred people to give OWID unrestricted funding and let OWID use their own judgement
And I got the impression that maybe in general they might not be open to restricted funding
But maybe they’d be more open to it from EA sources when we do have a really good rationale and it roughly aligns with OWID’s own vision
Is this better than trying to facilitate the creation of new things kind-of like Our World in Data?
Who at this org might be good to contact about this?
Luca mentioned edouard@ourworldindata.org
https://edomt.github.io/about/
Luca: “I talked with Edouard Mathieu (from OWID) about this, and know he’s thinking about what data is relevant for longtermism (my impression from him is that it seems quite hard for certain EA topics since lots of the worries are unprecedented and thus don’t really have data on them)”
Who else could give advice on this?
People’s thoughts on this:
Misc:
EA research training program participants
E.g., SERI fellows, RP interns, CHERI fellows, LPP fellows
I’ll post an encouragement in the RP Slack in July suggesting the RP interns consider applying for funding
Template
What sort of things might I want them to use money for?
Why do I think they might be able to do useful things with money?
Theory of change:
Possible downsides:
Open questions:
Who at this org might be good to contact about this?
Who else could give advice on this?
People’s thoughts on this:
Misc: