I read Cremer as gesturing in these passages to the point Tyler Cowen made here (a):
> Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.
>
> I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant. And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated). When it comes to existential risk, I generally prefer to invest in talent and good institutions, rather than trying to fine-tune predictions about existential risk itself.
>
> If EA is going to do some lesson-taking, I would not want this point to be neglected.
I previously addressed this here.
Thanks. I think Cowen’s point is a mix of your (a) & (b).
I think this mixture is concerning and should prompt reflection on some foundational issues.