With existential risk from unaligned AI, I don’t think anyone has ever told a very clear story about how AI will actually get misaligned, get loose, and kill everyone.
This should be evidence against AI x-risk![1] Even in the atmospheric-ignition case before Trinity, they had more concrete models to work with. If we can't build a concrete model here, that implies we don't have a concrete/convincing case for why it should be prioritised at all, imo. It's similar to the point in my footnotes that you need to argue for both p and p->q, not just the latter. This is what I would expect to see if the case for p were unconvincing/incorrect.
I don’t think this is a problem: we shouldn’t expect to know all the details of how things go wrong in advance
Yeah, I agree with this. But uncertainty and cluelessness about the future should decrease one's confidence that one is working on the most important thing in the history of humanity, one would think.
and it is worthwhile to do a lot of preparatory research that might be helpful so that we’re not fumbling through basic things during a critical period. I think the same applies to digital minds.
I'm all in favour of research, but how much should that research get funded? Can it be justified above other potential uses of money and resources in general? Should it be an EA priority as defined by the AWDW framing? These questions were (almost) entirely unargued for.
[1] Not dispositive evidence, perhaps, but a consideration.