What are his thoughts on person-affecting views, including asymmetric ones, and their implications for longtermism, especially Teruji Thomas's The Asymmetry, Uncertainty, and the Long Term?
How much does longtermism depend on expected value maximization, especially maximizing a utility function that’s additive over moral patients?
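(Roughly, an additive utility function here would be a total-welfare form like $U(x) = \sum_i u_i(x)$, summing well-being across all moral patients $i$.)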
What are the best arguments for and against expected value maximization as normatively required?
What does he think about the vulnerability of expected value maximization with unbounded (including additive) utility functions to Dutch books, money pumps, and violations of the sure-thing principle? See, e.g., Paul Christiano's comment involving St. Petersburg lotteries.
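(For a standard illustration of the unboundedness problem: a St. Petersburg lottery pays $2^n$ utility with probability $2^{-n}$ for each $n \geq 1$, so its expected utility is $\sum_{n \geq 1} 2^{-n} \cdot 2^n = \infty$; mixtures and comparisons of such lotteries are what generate the money pumps and sure-thing-principle violations at issue.)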
What does he think about stochastic dominance as an alternative decision theory? Are there any other decision theories he likes?
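(For reference, the usual definition: prospect $A$ stochastically dominates prospect $B$ if $\Pr(A \geq x) \geq \Pr(B \geq x)$ for every outcome value $x$, with strict inequality for some $x$; the idea, as I understand it, is to require only that we never choose dominated options, rather than that we maximize expected value.)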
What are his thoughts on the importance and implications of the possibility of aliens for existential risks, including both extinction risks and s-risks? What about grabby aliens in particular? Should we expect to be replaced (or to have our descendants replaced) by aliens eventually anyway? Should we worry about conflicts with aliens leading to s-risks?
If the correct normative view is impartial, is (Bayesian) expected value maximization too agent-centered, in the way that ambiguity aversion with respect to the difference one makes is (the latter is discussed in The Case for Strong Longtermism)? A Bayesian uses their own single joint probability distribution, without good justification for choosing it over many alternatives. One alternative would be something like the maximality rule, which checks multiple probability distributions rather than committing to a fairly arbitrarily chosen single one.
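(Roughly, under the maximality rule an option is permissible just in case no alternative is better, e.g. has higher expected value, according to every probability distribution in the set under consideration.)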
What is his position on EDT vs CDT and other alternatives? What are the main practical implications?
For moral uncertainty, in what (important) cases does he think intertheoretic comparisons are justified rather than arbitrary, i.e. cases where alternative normalizations with vastly different implications aren't just as justifiable?
What are his meta-ethical views? Is he a moral realist or antirealist? What kind? What are the main practical implications?