Red teaming papers as an EA training exercise?
I think a plausibly good training exercise for EAs who want to get better at empirical/conceptual research is to deep-dive into seminal papers/blog posts and attempt to identify all the empirical and conceptual errors in past work, especially a) writings by other respected EAs or b) other work that we otherwise think of as especially important.
I’m not sure how knowledgeable you have to be to do this well, but I suspect it’s approachable for smart people who have finished high school, and certainly for those who have finished undergrad^ with a decent science or social science degree.
I think this is good career building for various reasons:
you can develop a healthy skepticism of the existing EA orthodoxy
I mean skepticism that’s grounded in specific beliefs about why things ought to be different, rather than just vague “weirdness heuristics” or feeling like the goals of EA conflict with other tribal goals.
(I personally have not found high-level critiques of EA, and I have read many, to be particularly interesting or insightful, but this is just a personal take).
you actually deeply understand at least one topic well enough to point out errors
For many people and personalities, critiquing a specific paper/blog post may be a less hairy “entry point” into doing EA-research-adjacent work than plausible alternatives, like trying to form your own deep inside views on really open-ended and ambiguous questions such as “come up with a novel solution in AI alignment” or “identify a new cause X”
creates legible career capital (at least within EA)
requires relatively little training/guidance from external mentors, meaning
our movement devotes fewer of its scarce mentorship resources to this
people with worse social skills/network/geographical situation don’t feel (as much) at a disadvantage for getting the relevant training
you can start forming your own opinions/intuitions, both object-level views and meta-level heuristics for which things are likely to be correct vs. wrong.
In some cases, the errors are actually quite big, and worth correcting (relevant parts of) the entire EA movement on.
Main “cons” I can think of:
I’m not aware of anybody successfully doing a really good critique for the sake of doing a really good critique. The most exciting examples I’m aware of (publicly: zdgroff’s critique of Ng’s original paper on wild animal suffering and alexrjl’s critique of Giving Green; I also have private examples) mostly come from people trying to deeply understand a thing for themselves, and then spotting errors with existing work along the way.
It’s possible that doing deliberate “red-teaming” would make one predisposed to spot trivial issues rather than serious ones, or falsely identify issues where there aren’t any.
Maybe critiques are a less important skill to develop than forming your own vision/research direction and executing on it, and telling people to train for this skill might actively hinder their ability to be bold & imaginative?
^ Of course, this depends on field. I think even relatively technical papers within EA are readable to a recent undergrad who cares enough, but this will not be true for eg (most) papers in physics or math.
I think people are overcomplicating this. You should generally follow the law, but to guard against the risk of being a stickler in unreasonable ways (trying to avoid “3 felonies a day”), you can just imagine whether uninvolved peers hearing about your actions would think the situation is obviously okay. Some potential ways to think about such peer groups:
What laws people in the country you live in think are absolutely normal and commonplace to break.
For example, bribing police officers is generally illegal, but iiuc in some countries approximately everybody bribes police officers at traffic stops
What laws people in your home country think are illegitimate and thus worth breaking
For example some countries ban homosexuality, but your typical American would not consider it blameworthy to be gay.
What laws other EAs (not affiliated in any way with your organization) think are okay to break.
So far, candidates people gave include ag-gag laws and taking stimulants for undiagnosed ADHD.
FWIW I’m not necessarily convinced that the majority of EAs agree here; I’d like to see polls.
What laws your non-EA friends think are totally okay to break
For example, most college-educated Millennials would not consider downloading papers on Sci-Hub to be blameworthy.
I think a relatively conservative organization should find it permissible to break the law if and only if every possible reasonable peer group considers it acceptable, while a relatively liberal but probably still acceptable organization may consider it permissible to break laws if some reasonable peer group considers it acceptable.
In the most problematic examples in question, none of the alleged law-breaking I’m aware of (driving without a license[1], pressuring someone to transport recreational drugs across borders) is likely to be considered acceptable by any reasonable peer group, including other EAs.
This is similarly true for other law-breaking scandals I’ve heard about in the past, including theft, fraud, workplace sexual harassment, etc.
[1] Puerto Rico is in the US