Near-term AI ethics is the branch of AI ethics that studies the moral questions arising from issues in AI that society is already facing or will likely face very soon. Examples include concerns about data privacy, algorithmic bias, self-driving cars, and autonomous weapons. Long-term AI ethics, by contrast, is the branch of AI ethics that studies the moral questions arising from issues expected to arise when AI is much more advanced than it is today. Examples include the implications of artificial general intelligence or transformative artificial intelligence.[1][2]
Further reading
Prunkl, Carina & Jess Whittlestone (2020) Beyond near- and long-term: towards a clearer account of research priorities in AI ethics and society, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 138–143.
Related entries
AI alignment | AI governance | ethics of artificial intelligence
[1] Prunkl, Carina & Jess Whittlestone (2020) Beyond near- and long-term: towards a clearer account of research priorities in AI ethics and society, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 138–143.
[2] Brundage, Miles (2017) Guide to working in AI policy and strategy, 80,000 Hours, June 7.