We’ve thought about it a lot, but that doesn’t mean we’ve produced anything worthwhile. It’s like saying that literal doom prophets are the best group to defer to about when the world will end, because they’ve spent the most time thinking about it.
I think maybe about 1% of publicly available EA thought on AI is more than science fiction. Maybe less. I’m much more worried about catastrophic AI risk than ‘normal people’ are, but I don’t think we’ve made convincing arguments about how those catastrophes will happen, why they will happen, or how to tackle them.