Also, when reading Greaves and Mogensen’s papers, I was reminded of the ideas of cluster thinking (also here) and model combination. I could be drawing faulty analogies, but it seemed like those ideas could be ways to capture, in a form that can actually be worked with, the following idea (from Greaves; the same basic concept is also used in Mogensen):
in the situations we are considering, instead of having some single and completely precise (real-valued) credence function, agents are rationally required to have imprecise credences: that is, to be in a credal state that is represented by a many-membered set of probability functions (call this set the agent’s ‘representor’)
That is, we can consider each probability function in the agent’s representor as one model, and then either qualitatively use Holden’s idea of cluster thinking, or get a weighted combination of those models. Then we’d actually have an answer, rather than just indifference.
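Here’s a minimal sketch of what the weighted-combination version of this might look like. Everything in it is hypothetical and invented purely for illustration (the two toy world-states, the payoffs, the three-member representor, and the equal weights); it’s just meant to show the mechanics of treating each probability function in the representor as one “model” and then aggregating:

```python
# Hypothetical sketch: model combination over a representor.
# All states, payoffs, credences, and weights below are made up for illustration.

# Two possible world-states and two actions, with invented payoffs in each state.
PAYOFFS = {
    "fund_AMF":   {"good_flow_through": 10.0, "bad_flow_through": -4.0},
    "do_nothing": {"good_flow_through": 0.0,  "bad_flow_through": 0.0},
}

# The agent's representor: several probability functions over the states,
# standing in for the "many-membered set" in the quoted passage.
REPRESENTOR = [
    {"good_flow_through": 0.9, "bad_flow_through": 0.1},
    {"good_flow_through": 0.5, "bad_flow_through": 0.5},
    {"good_flow_through": 0.2, "bad_flow_through": 0.8},
]

# Weights on each probability function, treated as weights on "models".
# Equal weights here; in practice these would reflect how seriously the
# agent takes each way of assigning probabilities.
WEIGHTS = [1 / len(REPRESENTOR)] * len(REPRESENTOR)


def expected_value(action: str, credence: dict[str, float]) -> float:
    """Expected value of an action under one probability function."""
    return sum(p * PAYOFFS[action][state] for state, p in credence.items())


def combined_expected_value(action: str) -> float:
    """Weighted combination of the expected values across the representor."""
    return sum(
        w * expected_value(action, credence)
        for w, credence in zip(WEIGHTS, REPRESENTOR)
    )


for action in PAYOFFS:
    per_model = [round(expected_value(action, c), 2) for c in REPRESENTOR]
    print(action, "per-model EVs:", per_model,
          "combined:", round(combined_expected_value(action), 2))
```

In this toy setup the representor disagrees about the sign of fund_AMF’s expected value (one of the three functions makes it negative), which is roughly the situation where imprecise-credence views deliver no verdict; the weighted combination still returns a definite (if revisable) answer, which is the “answer rather than just indifference” point above. A cluster-thinking analogue would instead look qualitatively at how many of the models point the same way, and how robustly.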
This seems like potentially “the best of both worlds”; i.e., a way to capture both of the following intuitively appealing ideas:
perhaps we shouldn’t present singular, sharp credence functions over extremely hard-to-predict long-term effects
we can still make educated guesses like “avoiding extinction is probably good in expectation” and (perhaps) “giving to AMF is probably good in expectation”.
(This second intuition can rest on ideas like “Yeah, ok, I agree that it’s ‘unclear’ how to weigh up these arguments, but I weigh up arguments all the time when it’s unclear how to do so. I’m still at least slightly more convinced by argument X, so I’m going to go with what it suggests, and just also remain extremely open to new evidence.”)