Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.
I've also done economic modelling for some animal welfare issues.
Want to leave anonymous feedback for me, positive, constructive or negative? https://www.admonymous.co/michael-st-jules
Here are a few things you might need to address to convince a skeptic:
1. Humans currently have access to, maintain, and can shut down or destroy the hardware and infrastructure AI depends on. This is an important advantage.
2. Ending us all would be risky from an AI's perspective, because of the risk of shutdown, or of losing the humans who maintain, extract resources for, and build the infrastructure it depends on, without an adequate replacement.
3. I'd guess we can make AIs risk-averse (or difference-making risk-averse) with respect to whatever goals they do end up with, even if we can't align them; see the sketch after this list.
4. Ending us all sounds hard and unlikely. There are many ways we are resilient, and many ways governments and militaries could respond to a threat of this level.
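To illustrate the kind of risk aversion I have in mind in point 3 (a rough formalization of my own, not a standard definition from the literature; the symbols f, u, and "default" are purely illustrative): a risk-averse agent could evaluate an action A by the expectation of a concave increasing transform f of its utility, while a difference-making risk-averse agent instead applies f to the difference in utility the action makes relative to some default like inaction:

$$V_{\text{RA}}(A)=\mathbb{E}\big[f(u(A))\big],\qquad V_{\text{DMRA}}(A)=\mathbb{E}\big[f\big(u(A)-u(\text{default})\big)\big],\qquad f \text{ concave, increasing.}$$

Under either evaluation, a gamble like attempting takeover, which carries some probability of shutdown or of losing human-maintained infrastructure (point 2), gets discounted relative to risk-neutral expected value maximization, so an agent like this should be less inclined to take it.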