Additional intervention ideas

Here I’ll keep track of additional intervention ideas that have occurred to me since I finished drafting this post. Perhaps in future I’ll integrate some into the post itself.
Creating and/or improving EA-relevant journals
Could draw more people towards paying attention to important topics
Could make it easier for EAs doing graduate programs (especially PhDs) or pursuing academic careers to focus on high-priority topics and pursue them in the most impactful ways
That could in turn help with “Increasing and/or improving EAs’ use of non-EA options for research training, credentials, etc.”
Making high-quality data that’s relevant to high-priority topics more easily available
The idea here is that “a lot of researchers will follow good data wherever it comes from”
(This was suggested by a commenter on a draft of this post)
An idea from Linch (see also the comments on the shortform itself):
I think a plausibly good training exercise for EAs who want to get better at empirical/conceptual research is to do a deep dive into seminal papers/blog posts and attempt to identify all the empirical and conceptual errors in that past work, especially writing by either a) other respected EAs or b) work that we otherwise think of as especially important.
I’m not sure how knowledgeable you have to be to do this well, but I suspect it’s approachable for smart people who have finished high school, and certainly for those who have finished undergrad with a decent science or social science degree.
I think this is good career building for various reasons:
you can develop a healthy skepticism of the existing EA orthodoxy
I mean skepticism that’s grounded in specific beliefs about why things ought to be different, rather than just vague “weirdness heuristics” or a feeling that the goals of EA conflict with other tribal goals.
you actually deeply understand at least one topic well enough to point out errors
creates legible career capital (at least within EA)
requires relatively little training/guidance from external mentors, meaning
the movement devotes fewer of its scarce mentorship resources to this
people with worse social skills/network/geographical situation don’t feel (as much) at a disadvantage for getting the relevant training
you can start forming your own opinions/intuitions about both object-level questions and meta-level heuristics for which things are likely to be correct vs. wrong.
In some cases, the errors are actually quite big, and worth correcting (relevant parts of) the EA movement on.
Main “cons” I can think of:
I’m not aware of anybody successfully doing a really good critique just for the sake of doing a really good critique. The most exciting examples I’m aware of publicly (zdgroff’s critique of Ng’s original paper on wild animal suffering, and alexrjl’s critique of Giving Green; I also have private examples) mostly come from people trying to deeply understand a thing for themselves, and spotting errors in existing work along the way.
It’s possible that doing deliberate “red-teaming” would make one predisposed to spot trivial issues rather than serious ones, or falsely identify issues where there aren’t any.
Maybe critiques are a less important skill to develop than forming your own vision/research direction and executing on it, and telling people to train for this skill might actively hinder their ability to be bold & imaginative?
An idea from Buck (see also the comments on the linked shortform itself):
Here’s a crazy idea. I haven’t run it by any EAIF people yet.
I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)
Basic structure:
Someone picks a book they want to review.
Optionally, they email me asking how on-topic I think the book is (to reduce the probability of not getting the prize later).
They write a review, and send it to me.
If it’s the kind of review I want, I give them $500 in return for them posting the review to EA Forum or LW with a “This post sponsored by the EAIF” banner at the top. (I’d also love to set up an impact purchase thing but that’s probably too complicated).
If I don’t want to give them the money, they can do whatever with the review.
What books are on topic: Anything of interest to people who want to have a massive altruistic impact on the world. More specifically:
Things directly related to traditional EA topics
Things about the world more generally, e.g. macrohistory, how governments work, The Doomsday Machine, or the history of science (e.g. Asimov’s “A Short History of Chemistry”)
I think that books about self-help, productivity, or skill-building (e.g. management) are dubiously on topic.
Goals:
I think that these book reviews might be directly useful. There are many topics where I’d love to know the basic EA-relevant takeaways, especially when combined with basic fact-checking.
It might encourage people to practice useful skills, like writing, quickly learning about new topics, and thinking through what topics would be useful to know more about.
I think it would be healthy for EA’s culture. I worry sometimes that EAs aren’t sufficiently interested in learning facts about the world that aren’t directly related to EA stuff. I think that this might be improved both by people writing these reviews and people reading them.
Conversely, sometimes I worry that rationalists are too interested in thinking about the world by introspection or weird analogies relative to learning many facts about different aspects of the world; I think book reviews would maybe be a healthier way to direct energy towards intellectual development.
It might surface some talented writers and thinkers who weren’t otherwise known to EA.
It might produce good content on the EA Forum and LW that engages intellectually curious people.
Suggested elements of a book review:
One paragraph summary of the book
How compelling you found the book’s thesis, and why
The main takeaways that relate to vastly improving the world, with emphasis on the surprising ones
Rough notes on another idea, following a call I just had:
Setting up something in between a research training program and a system for collaborations in high schools, universities, or local EA groups
Less vetting and probably lower average current knowledge, aptitude, etc. than research training program participants undergo/have
But this reduces the costs for vetting
And it opens this up to an additional pool of people (who may not yet be able to pass that vetting)
Plus, this could allow more people to test their fit for and get better at mentorship, by mentoring people in these “programs” or simply by collaborating with peers in these programs (since collaboration still has some mentorship-like elements)
E.g., in some cases, someone who has just started a PhD or only recently learned about the cause area they’re now focused on may not be able to usefully serve as a mentor for a participant in a research training program like SERI, but may be able to usefully serve as a mentor for a high school student or some undergrads
(I’m just saying there’d be some cases in that in-between space—there’d also be some PhD students who can usefully serve as mentors for SERI fellows, and some who can’t usefully serve as mentors for high school students)