Before I can or should try to write up that take, I need to fact-check one of my take-central beliefs about how the last couple of decades have gone down. My belief is that the Open Philanthropy Project, EA generally, and Oxford EA particularly, had bad AI timelines and bad ASI ruin conditional probabilities; and that these invalidly arrived-at beliefs were in control of funding, and were explicitly publicly promoted at the expense of saner beliefs.
We don’t know whether AGI timelines or ASI ruin conditional probabilities are “bad”, because neither event has happened yet. If you want to ask what Open Phil’s probabilities are and whether they disagree with your own, you should just ask that directly. My impression is that there is a wide range of views on both questions among EA org leadership.