I think it would be a good idea to be more explicit that other considerations besides those from (i) can inform how we do (ii), since otherwise we’re committed to consequentialism.
Also, at the risk of pedantry: suppose the maximizing course(s) of action are ruled out for nonconsequentialist reasons. Since (i) only cares about maximization, we won’t necessarily have information ranking the options that aren’t (near) maximal — for instance, we might devote most of our research to decisions we suspect might be maximal, to the neglect of others. In that case, (ii) won’t necessarily be informed by the value of outcomes at all.