Have you thought about contacting anyone at the Good Judgment Project? This seems like it would be a pretty useful tool for people in forecasting tournaments.
Great post! I think the WALY / QALY distinction—and how to explain it carefully—should be taught to anyone running a giving game. As someone who organised a game recently, I found it tricky to navigate the trade-off between emphasising the basics (“QALYs are a really important metric!”) and the caveats (“It’s only one of many important metrics”). This article by NICE helped me think the issue through.
Thanks for sharing! :) “statistical empathy” will now be a permanent part of my vocabulary. I would love to see people sharing “statistics that made me cry”. Here’s mine.
Sorry for sounding so negative! I should have said that I thought the post was well-written overall and made many good points about how we can learn from other movements. However, I still think the specific quotes used were unhelpful.
I think I would have been considerably more receptive to your post if you had crafted subtler quotes that could more plausibly be attributed to an EA. As it stands, it feels like straw (wo)manning.
Presumably, these would be interviews with major EA players?
A possible complementary interview series might look something like this. This is an interview series my girlfriend produced (unrelated to EA) that interviews people through photography: you click on each picture and hover over it to see the subject’s responses to interview questions. Would it be a good idea to interview ‘ordinary’ EAs through this method? I ask because she would probably do one if she thought there was sufficient interest.
Put yet another way: overcrowdedness is a significant concern. Perhaps you assign it a higher weighting within the ‘overcrowdedness / importance / tractability’ framework than the average EA does. If so, why not trade it off against the latter two? You could examine only moderately important careers—ones that receive little or no EA attention—where the average employee is much less talented than you. Or you could dedicate yourself to solving a seemingly intractable problem: it’s high risk, but that’s precisely why it might be overlooked, as Romeo points out.
Of course, if you think replaceability issues are truly ubiquitous, then even these suggestions are moot.
I’m afraid it’s slightly outdated (2008) and you can’t read it in its entirety online, but a chapter in Bostrom’s ‘Global Catastrophic Risks’ discusses this: ‘The Continuing Threat of Nuclear War’.
This framed a debate I was having with myself in very precise terms. Thank you!
If you are interested in applying to Oxford, but not FHI, then Michael Osborne is very sympathetic to AI safety but doesn’t currently work on it; he might be worth chatting to. Also, Shimon Whiteson does lots of relevant-seeming work in deep RL, but I don’t know whether he is at all sympathetic.