Any advice for researchers who want to conduct research similar to Rethink Priorities? Or useful resources that you point your researchers towards when they join?
It has been said before elsewhere by Peter, but worth stating again: read and practice Reasoning Transparency. Michael Aird compiled some great resources recently here.
I’d also refer people to Michael and Saulius’ replies to arushigupta’s similar subquestion in last year’s RP AMA.
One thing I’d add is that I think several people at RP and elsewhere would be very excited if someone could:
Find existing resources that work as good training for improving one’s reasoning transparency, and/or
Create such a resource
As far as I’m aware, currently the state of the art is “Suggest people read the post Reasoning Transparency, maybe point them to a couple somewhat related other things (e.g., the compilation I made that Neil links to, or this other compilation I made), hope they absorb it, give them a bunch of feedback when they don’t really (since it’s hard!), hope they absorb that, repeat.” I.e., the state of the art is kinda crappy. (I think Luke’s post is excellent, but just reading it is not generally sufficient for going from not doing the skill well to doing the skill well.)
I don’t know exactly what sort of resources would be best, but I imagine we could do better than what we have now.
Oh, and some other resources I’d often point people towards after they join are:
Giving and receiving feedback (including the top comments)
Countering imposter syndrome and anxiety about work
My collections on how to do high-impact research and get useful input from busy people
For longtermist work, I often point people to Holden Karnofsky’s impressions on career choice, particularly the section on building aptitudes for conceptual and empirical research on core longtermist topics.
I’ve also personally gained a lot from arguing with People Wrong on the Internet, but poor application of this principle may be generally bad for epistemic rigor. In particular, I think it probably helps to have a research blog and to be able to do things like spot potential holes in arguments (on EA social media, the EA Forum, research blogs, papers, etc.). That said, I think most EA researchers (including my colleagues) are much less Online than I am, so you definitely don’t need to develop an internet argument habit to be a good researcher.
Making lots of falsifiable forecasts about the short-term implications of your beliefs may be helpful (see the sketch after this list). Calibration training is probably less helpful, but lower cost.
Trying to identify important and tractable (sub)questions is often even more important than the ability to answer them well. In particular, very early on in a research project, try to track “what if I answered this question perfectly? Does it even matter? Will this meaningfully impact anyone’s decisions, including my own? Will this research build towards something else that will meaningfully impact decisions later?”
“Politely disagreeable” seems like a pretty important disposition. You benefit epistemically from being nice and open enough to other people’s ideas that you a) deliberately seek out contrarian opinions and b) don’t reject them outright, but you also need to be disagreeable enough that, in general, you don’t update your beliefs just because other (smart, respected, experienced, etc.) people confidently hold them.
Being very aggressively truth-seeking is a really important disposition. My belief is that most people are by default bad at this, including people who may otherwise make great EA researchers.
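To make the forecasting point above a bit more concrete, here’s a minimal sketch in Python of how you might score a handful of your own predictions; the forecasts and probability bands below are made up purely for illustration, not any particular tool or process we use:

```python
# A minimal illustrative sketch of scoring personal forecasts.
# The forecasts below are invented for the example.

forecasts = [
    # (stated probability, outcome: 1 = it happened, 0 = it didn't)
    (0.9, 1),
    (0.7, 1),
    (0.6, 0),
    (0.8, 1),
    (0.3, 0),
    (0.2, 1),
]

# Brier score: mean squared error between probabilities and outcomes.
# 0 is perfect; always saying 50% scores 0.25.
brier = sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Rough calibration check: within each probability band, how often did
# the event actually happen?
bands = {"below 50%": [], "50% and above": []}
for p, outcome in forecasts:
    key = "below 50%" if p < 0.5 else "50% and above"
    bands[key].append(outcome)

for band, outcomes in bands.items():
    if outcomes:
        rate = sum(outcomes) / len(outcomes)
        print(f"{band}: {len(outcomes)} forecasts, {rate:.0%} happened")
```

Even a crude log like this makes your beliefs falsifiable and lets you notice systematic over- or under-confidence over time.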
I also endorse Neil’s comment.