Thank you, this is helpful!
I just wanted to add BlueDot Impact's "AI Governance Fast Track Course" to the list of AI governance courses. It's a distilled version of their 12-week course, which I've just taken with a background in law. I can highly recommend it.
If you, dear reader of this comment, have any questions about it, or about BlueDot Impact's 12-week "AI Governance Course" which I will begin next week, I'm happy to try to answer them from a participant's perspective.
The "maximize expected choiceworthiness" approach has also been called the "expected moral value" (EMV) approach to axiological uncertainty in Greaves and Ord, "Moral uncertainty about population axiology" (2017).
In their paper (pp. 2–3), they also briefly discuss different approaches to moral uncertainty (just like this article). In addition to the "My Favourite Theory" approach, which picks the theory one is most confident in, they also describe a similar approach on which an agent chooses not according to their credences but according to their all-out beliefs.
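To make the contrast concrete, here is a small sketch (my own illustration, with made-up credences and choiceworthiness numbers, not figures from the paper) of how the EMV rule and the "My Favourite Theory" rule can recommend different options:

```python
# Hypothetical credences in two moral theories (illustrative only).
credences = {"theory_A": 0.6, "theory_B": 0.4}

# Hypothetical choiceworthiness of each option under each theory.
choiceworthiness = {
    "theory_A": {"option_1": 1.0, "option_2": 0.9},
    "theory_B": {"option_1": -10.0, "option_2": 5.0},
}

def emv_choice(credences, cw):
    """EMV rule: pick the option with the highest
    credence-weighted sum of choiceworthiness."""
    options = next(iter(cw.values())).keys()
    return max(options,
               key=lambda o: sum(credences[t] * cw[t][o] for t in credences))

def favourite_theory_choice(credences, cw):
    """My Favourite Theory rule: defer entirely to the single
    theory with the highest credence."""
    fav = max(credences, key=credences.get)
    return max(cw[fav], key=cw[fav].get)

print(emv_choice(credences, choiceworthiness))              # option_2
print(favourite_theory_choice(credences, choiceworthiness)) # option_1
```

Here theory_A slightly prefers option_1, but theory_B strongly disprefers it, so EMV picks option_2 while "My Favourite Theory" (deferring to theory_A alone) picks option_1.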