Is there any chance you have an example of your last suggestion in practice (stating a prior, then intuitively updating it after each consideration)? No worries if not.
Sorry for the slow reply. I don’t have a link to any examples, I’m afraid, but I just mean something like this:
Prior that we should put weights on arguments and considerations: 60%
Pros:
Clarifies the writer’s perspective on each of the considerations (65%)
Allows for better discussion for reasons x, y, z… (75%)
Cons:
Takes extra time (70%)
This is just an example I wrote down quickly, not actual views. But the idea is to state explicit probabilities so that we can see how they change with each consideration.
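If it helps, here’s what that bookkeeping might look like as a minimal Python sketch (the numbers are the made-up ones from the example above, not actual views):

```python
# Track how a stated probability changes as each consideration is weighed in.
# These numbers are the made-up ones from the example above, not actual views.
steps = [
    ("prior", 0.60),
    ("pro: clarifies the writer's perspective", 0.65),
    ("pro: allows for better discussion", 0.75),
    ("con: takes extra time", 0.70),
]

# Print each consideration together with the probability before and after it.
for (label, p), (_, prev) in zip(steps[1:], steps):
    direction = "up" if p > prev else "down"
    print(f"{label}: {prev:.0%} -> {p:.0%} ({direction})")
```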
To see how you can find the Bayes’ factors, note that if $P(W)$ is our prior probability that we should give weights, $P(\neg W) = 1 - P(W)$ is our prior that we shouldn’t, and $P(W \mid A_1)$ and $P(\neg W \mid A_1) = 1 - P(W \mid A_1)$ are the posteriors after argument 1, then the Bayes’ factor is
$$\frac{P(A_1 \mid W)}{P(A_1 \mid \neg W)} = \frac{P(W \mid A_1)}{P(\neg W \mid A_1)} \cdot \frac{P(\neg W)}{P(W)} = \frac{P(W \mid A_1)}{1 - P(W \mid A_1)} \cdot \frac{1 - P(W)}{P(W)} = \frac{0.65}{0.35} \cdot \frac{0.4}{0.6} \approx 1.24$$

Similarly, the Bayes’ factor for the second pro is $\frac{0.75}{0.25} \cdot \frac{0.35}{0.65} \approx 1.62$.
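In case it’s useful, here’s the same calculation as a small Python sketch (again using the made-up numbers from the example; the con’s factor isn’t computed above but follows from the same formula):

```python
# Bayes' factor for an argument, from the probability before the argument
# (prior) and the intuitively updated probability after it (posterior):
#   P(A|W) / P(A|not-W) = [post / (1 - post)] * [(1 - prior) / prior]
# Numbers are the made-up ones from the example above, not actual views.

def bayes_factor(prior: float, posterior: float) -> float:
    """Posterior odds divided by prior odds."""
    return (posterior / (1 - posterior)) * ((1 - prior) / prior)

print(bayes_factor(0.60, 0.65))  # first pro:  ~1.24
print(bayes_factor(0.65, 0.75))  # second pro: ~1.62
print(bayes_factor(0.75, 0.70))  # con:        ~0.78 (below 1, as expected for a con)
```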
Sorry for my very slow response!
Thanks—this is helpful! Also, I want to note for anyone else looking for the kind of source I mentioned, this 80K podcast with Spencer Greenberg is actually very helpful and relevant for the things described above. They even work through some examples together.
(I had heard about the “Question of Evidence,” which I described above, from looking at a snippet of the podcast’s transcript, but hadn’t actually listened to the whole thing. Doing a full listen felt very worth it for the kind of info mentioned above.)