If I understand you correctly, what you’re proposing is essentially a subset of classical decision theory with bounded utility functions. Recall that, under classical decision theory, we choose our action according to

$$\max_{a \in A} E[u(a, X)],$$

where $X$ is a random state of nature and $A$ an action space.
Suppose there are $N$ (infinitely many works too) moral theories $s_1, s_2, \dots, s_N$, each with probability $p(s_i)$ and associated utility $u_i$. Then we can define

$$u(a, X) = \sum_{i=1}^{N} p(s_i) \, u_i(a, X).$$

This step gives us (moral) uncertainty in our utility function.
Then, as far as I understand you, you want to define the component utility functions as

$$u_i(a, X) = \begin{cases} 1, & \text{if } (a, X) \text{ is acceptable under theory } s_i, \\ 0, & \text{if } (a, X) \text{ is unacceptable under theory } s_i. \end{cases}$$

Then $0 \le E[u_i(a, X)] \le 1$ is the probability of an acceptable outcome under $s_i$. And since we’re taking the expected value of these bounded component utilities to construct $u$, we’re in classical bounded-utility-function land.
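To make the construction concrete, here is a minimal Monte Carlo sketch of it in Python. Everything specific (the three actions, the two theories, their credences, and the acceptability rules) is a toy example I made up for illustration; only the structure — 0/1 component utilities mixed by $p(s_i)$ and maximized in expectation — comes from the argument above.

```python
import random

random.seed(0)

# Toy, hypothetical setup: three actions, two moral theories with
# credences p(s_i), and a state of nature X ~ Uniform(0, 1)
# approximated by Monte Carlo sampling.
actions = ["a1", "a2", "a3"]
credence = [0.7, 0.3]  # p(s_1), p(s_2)

def acceptable(theory, action, x):
    """0/1 component utility u_i(a, X): 1 iff (a, X) is acceptable
    under theory s_i. The rules below are purely illustrative."""
    if theory == 0:
        threshold = {"a1": 0.5, "a2": 0.3, "a3": 0.0}[action]
        return 1.0 if x > threshold else 0.0
    value = {"a1": 1.0, "a2": 0.6, "a3": 0.1}[action]
    return 1.0 if x < value else 0.0

samples = [random.random() for _ in range(10_000)]  # draws of X

def expected_utility(action):
    """E[u(a, X)] with u(a, X) = sum_i p(s_i) * u_i(a, X)."""
    return sum(
        p * sum(acceptable(i, action, x) for x in samples) / len(samples)
        for i, p in enumerate(credence)
    )

best = max(actions, key=expected_utility)  # argmax_a E[u(a, X)]
```

Because each $u_i$ is bounded in $[0, 1]$ and the credences sum to one, `expected_utility` is itself bounded in $[0, 1]$: each action’s score is a credence-weighted probability of an acceptable outcome, exactly the bounded-utility picture above.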
That said, I believe that this post would benefit from a rewrite of the paragraph starting with “Success maximization is a mechanism by which to generalize maxipok”. It states “Let $a_i$ be an action $i$ from the set of $m$ actions $A = \{a_1, a_2, \dots, a_m\}$.” Is $i$ an action, $a_i$ an action, or both? I also don’t understand what $\pi$ is. Are there states of nature in this framework? You say that $s$ is a moral theory, so the state of nature cannot be $s$, can it?
You should also add concrete examples. If you add one or two, it might become easier to understand what you’re doing despite the formal definition not being 100% clear.