Some questions here are whether 50-50 as precise probabilities to start is reasonable and whether the approach to assign 50-50 as precise probabilities is reasonable.
If, when looking at the scenario, you would have done something like “wow, that’s so complicated and I’m clueless, so 50-50”, then your reaction almost certainly would have been the same if the example had originally included one extra eyewitness in favour of one side. But this tells you your initial way of assigning credences was insensitive to this small difference. And yet, after the initial assignment, you say it should be sensitive.
Or, if you forgot your initial judgement or the number of eyewitnesses and were just given the total and looked at the situation with fresh eyes, you’d come up with 50-50 again.
Alternatively, you could build a precise probability distribution as a function of the evidence that weighs it all, but this would be very sensitive to arbitrary choices.
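To illustrate that sensitivity, here is a minimal sketch of one such construction, with made-up numbers: treat each eyewitness as independent evidence with the same assumed reliability, and update the odds multiplicatively. The 40–29 split and the reliability values are hypothetical, not from the scenario; the point is only that the posterior swings widely with the arbitrary reliability assumption.

```python
def posterior_jones(n_for_jones, n_for_smith, reliability, prior=0.5):
    # Model each eyewitness as independent evidence with the same
    # assumed reliability r: each multiplies the odds by r / (1 - r).
    lr = reliability / (1 - reliability)
    odds = (prior / (1 - prior)) * lr ** (n_for_jones - n_for_smith)
    return odds / (1 + odds)

# Hypothetical 40-29 split of 69 eyewitnesses: the answer depends
# heavily on the (arbitrary) assumed reliability.
for r in (0.51, 0.55, 0.60):
    print(f"r = {r}: P(Jones) = {posterior_jones(40, 29, r):.2f}")
```

Small changes to the assumed reliability (or to the prior, or the independence assumption) move the posterior from roughly 60 % to nearly certain, which is the arbitrariness worry.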
I could report 50 % for 68 and 69 eyewitnesses, but this does not necessarily imply I am insensitive to small changes in the number of eyewitnesses. In practice, I would be reporting my best guess rounded to the closest multiple of 0.1 or so. So I believe the reported value being exactly the same would only mean my best guesses differ by less than 10 pp, not that they are exactly the same. I would say the mean of the (rounded) reported best guesses for a given number of eyewitnesses tends to the (precise) underlying best guess as the number of reports increases. If I could hypothetically encounter the question in practically the same situation 1 M times, I could easily see the mean of my reported values for 68 and 69 eyewitnesses being different.
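The point above can be sketched with a toy simulation. The shift per extra eyewitness (0.5 pp) and the judgement noise are made-up parameters, purely for illustration: individual rounded reports are usually 0.5 for both 68 and 69 eyewitnesses, but the means over many encounters come apart.

```python
import random

def reported_guess(n_eyewitnesses, rng):
    # Hypothetical model: the underlying best guess shifts by 0.5 pp
    # per extra eyewitness, plus some judgement noise (both made up).
    underlying = 0.5 + 0.005 * (n_eyewitnesses - 68)
    noisy = underlying + rng.gauss(0, 0.03)
    return round(noisy, 1)  # report rounded to the nearest 10 pp

def mean_report(n_eyewitnesses, trials=100_000, seed=0):
    rng = random.Random(seed)
    return sum(reported_guess(n_eyewitnesses, rng) for _ in range(trials)) / trials

# Both means sit near 0.5, but the 69-eyewitness mean is slightly higher.
print(mean_report(68), mean_report(69))
```

So a coarse reporting scale can hide a fine-grained underlying sensitivity, which only shows up in the average over many reports.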
If I asked you to actually decide who’s more likely to be the culprit, how would you do it?
What do you do if you don’t have reference class information for each part of the problem? How do you weigh the conflicting evidence? I’m imagining that at many steps, you’d have to rely on direct impressions or numbers that just came to mind.
Would you feel like whatever came out was very arbitrary and depended too much on direct impressions or numbers that just came to mind? Would you actually believe and endorse what came out? Would you defend it to other people?
What I would actually do depends a lot on the situation, but I have a hard time imagining scenarios where it matters whether the probability of Jones having committed the crime is 40 % or 60 %. So I might not even try to decrease the uncertainty about this, and just focus on other considerations. What would maximise the impact of my future donations and work? What information would I have about Jones and Smith? Who would have the greater potential to contribute to a better world? How much time would I have to decide? Would I be accountable in some way for my decision? If so, how would my decision be assessed? What would be the potential consequences of people concluding I made a good or bad decision? How were decisions like mine assessed in the past?
Do you (Michael) see your views about precise and imprecise credences significantly affecting what you would actually do in the real world in a scenario where you had to blame Jones or Smith? Would considerations like the ones I mentioned above matter more? I may be dodging your question, but I am ultimately interested in making better decisions in the real world. So I think it makes sense to discuss precise and imprecise credences in the context of realistic scenarios.
Do you (Michael) see your views about precise and imprecise credences significantly affecting what you would actually do in the real world in a scenario where you had to blame Jones or Smith?
Probably not. I see it as more illustrative of important cases. Imagine instead it’s between supporting an intervention or not, and it has similar complexity and considerations going in each direction.
More relevant examples to us could be: the effects on wild animals of crops vs nature, of climate change, and of fishing; the far future effects of our actions; and the acausal influence of our actions. These are all things I feel clueless enough about to mostly bracket away and ignore when they are side effects of direct interventions I’m interested in supporting. I’m not ignoring them because I think they’re small. I think they are likely much larger than the effects I’m not ignoring.
I may also want to further study some of them, but I’m often not that optimistic about making much progress (especially for far future effects and acausal influence) and for that progress to be used in a way that isn’t net negative overall by my lights.
How much more optimistic would you be about research on i) the welfare of soil animals and microorganisms, and ii) comparisons of (expected hedonistic) welfare across species if you strongly endorsed expectational total hedonistic utilitarianism, moral realism, and precise probabilities, and ignored acausal effects and effects after 100 years?