Great point! I understand the high-level idea behind priors and updating, but I’m not very familiar with the details of Bayes factors and other Bayesian topics. A quick look at Wikipedia didn’t feel super helpful… I’m guessing you don’t mean formally applying the equations, but instead doing it in a more approximate or practical way? I’ve heard Spencer Greenberg’s description of the “Question of Evidence” (how likely would I be to see this evidence if my hypothesis is true, compared to if it’s false?). Are there similar quick, practical framings that could be applied for the purposes described in your comment? Do you know of any good, practical resources on Bayesian topics that would be sufficient for what you described?
Good questions! It’s a shame I don’t have good answers. I remember finding Spencer Greenberg’s framing helpful too but I’m not familiar with other useful practical framings, I’m afraid.
I suggested the Bayes’ factor because it seems like a natural measure of the strength/weight of an argument, but I don’t usually find it super easy to reason about.
The final suggestion I made will often be easier to do intuitively. You can just state your prior at the start and then intuitively update it after each argument/consideration, without any maths. I think this is something that you get a bit of a feel for with practice. I would guess that this would usually be better than trying to formally apply Bayes’ rule. (You could then work out your Bayes’ factor, as it’s just a function of your prior and posterior, but that doesn’t seem especially useful at this point/it seems like too much effort for informal discussions.)
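To make that last point concrete, here’s a minimal sketch (Python, purely for illustration; the function name is my own) of working out a Bayes’ factor from a prior and a posterior:

```python
def bayes_factor(prior: float, posterior: float) -> float:
    """Bayes' factor implied by updating from `prior` to `posterior`.

    It's the ratio of posterior odds to prior odds:
        BF = [posterior / (1 - posterior)] / [prior / (1 - prior)]
    """
    posterior_odds = posterior / (1 - posterior)
    prior_odds = prior / (1 - prior)
    return posterior_odds / prior_odds
```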
Is there any chance you have an example of your last suggestion in practice (stating a prior, then intuitively updating it after each consideration)? No worries if not.
Sorry for the slow reply. I don’t have a link to any examples I’m afraid but I just mean something like this:
Prior that we should put weights on arguments and considerations: 60%
Pros:
- Clarifies the writer’s perspective on each of the considerations (65%)
- Allows for better discussion for reasons x, y, z… (75%)
Cons:
- Takes extra time (70%)
This is just an example I wrote down quickly, not my actual views. But the idea is to state explicit probabilities so that we can see how they change with each consideration.
To see how you can work out the Bayes’ factors, note that if P(W) is our prior probability that we should give weights, P(¬W) = 1 − P(W) is our prior that we shouldn’t, and P(W|A1) and P(¬W|A1) = 1 − P(W|A1) are the posteriors after the first argument, then the Bayes’ factor is

$$\frac{P(A_1|W)}{P(A_1|\neg W)} = \frac{P(W|A_1)}{P(\neg W|A_1)} \cdot \frac{P(\neg W)}{P(W)} = \frac{P(W|A_1)}{1 - P(W|A_1)} \cdot \frac{1 - P(W)}{P(W)} = \frac{0.65}{0.35} \cdot \frac{0.4}{0.6} \approx 1.24.$$

Similarly, the Bayes’ factor for the second pro is $\frac{0.75}{0.25} \cdot \frac{0.35}{0.65} \approx 1.62$.
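And just to sanity-check the arithmetic with the sketch above (redefined compactly here so the snippet runs on its own):

```python
def bayes_factor(prior: float, posterior: float) -> float:
    # Ratio of posterior odds to prior odds (same as the sketch above).
    return (posterior / (1 - posterior)) / (prior / (1 - prior))

print(round(bayes_factor(0.60, 0.65), 2))  # 1.24 (first pro: 60% -> 65%)
print(round(bayes_factor(0.65, 0.75), 2))  # 1.62 (second pro: 65% -> 75%)
```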
Thanks, this is helpful! Also, I want to note, for anyone else looking for the kind of resource I mentioned, that this 80K podcast with Spencer Greenberg is actually very helpful and relevant for the things described above. They even work through some examples together.
(I had heard about the “Question of Evidence,” which I described above, from looking at a snippet of the podcast’s transcript, but hadn’t actually listened to the whole thing. Doing a full listen felt very worth it for the kind of info mentioned above.)