Ambiguous statements are bad, 100%, but so are clear, baseless statements.
You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report. The former claim is undoubtedly true, but it doesn’t necessarily describe a problematic phenomenon. (See Greg Lewis’s recent post; I’m not sure if you disagree.) The latter claim would be very worrying if true, but I don’t see reason to believe that it is. Sure, EAs sometimes lack good reasons for the views they espouse, but this is a general phenomenon unrelated to the practice of reporting credences explicitly.
You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report.
Habryka seems to be talking about people who have evidence and are just not stating it, so we might be talking past one another. I said in my first comment “There’s also a lot of pseudo-superforecasting … without any evidence backing up those credences.” I didn’t say “without stating any evidence backing up those credences.” This is not a guess on my part. I’ve seen comments where they say explicitly that the credence they’re giving is a first impression, and not something well thought out. It’s fine for them to have a credence, but why should anyone care what your credence is if it’s just a first impression?
See Greg Lewis’s recent post; I’m not sure if you disagree.
I completely agree with him. Imprecision should be stated and significant figures are a dumb way to do it. But if someone said “I haven’t thought about this at all, but I’m pretty sure it’s true”, is that really all that much worse than providing your uninformed prior and saying you haven’t really thought about it?
I agree that EAs put superforecasters and superforecasting techniques on a pedestal, more than is warranted.
But if someone said “I haven’t thought about this at all, but I’m pretty sure it’s true”, is that really all that much worse than providing your uninformed prior and saying you haven’t really thought about it?
Yes, I think it’s a lot worse. Consider the two statements:
I haven’t thought much about it, but I’m pretty sure (99.99%) based on a cursory read that human extinction from climate change won’t happen.
And
I haven’t thought much about it, but I’m pretty sure (80%) based on a cursory read that human extinction from climate change won’t happen.
The two statements are pretty similar in verbalized terms (and each falls under loose interpretations of what “pretty sure” means in common language), but ought to have drastically different implications for behavior!
I basically think EA and associated communities would be better off to have more precise credences, and be accountable for them. Otherwise, it’s difficult to know if you were “really” wrong, even after checking hundreds of claims!
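As a rough illustration of how sharply those two credences diverge in decision terms, here is a toy expected-value sketch; the value figure is invented purely for illustration, and only the ratio between the two results matters:

```python
# Toy comparison of the two "pretty sure" statements above. The value
# assigned to avoiding extinction is a made-up placeholder.
VALUE_IF_NO_EXTINCTION = 1e15  # hypothetical units

for p_safe in (0.9999, 0.80):
    p_extinction = 1 - p_safe
    expected_loss = p_extinction * VALUE_IF_NO_EXTINCTION
    print(f"{p_safe:.2%} sure we're safe -> expected loss {expected_loss:.3g}")

# The 80% credence implies an expected loss 2,000x larger than the
# 99.99% credence, so the two statements warrant very different
# willingness to pay for mitigation.
```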
The two statements are pretty similar in verbalized terms (and each falls under loose interpretations of what “pretty sure” means in common language), but ought to have drastically different implications for behavior!
Yes, you’re right. But I’m making a distinction between people’s own credences and their ability to update the credences of other people. As far as changing the opinion of the reader, when someone says “I haven’t thought much about it”, it should be an indicator not to update your own credence by very much at all.
I basically think EA and associated communities would be better off to have more precise credences, and be accountable for them
I fully agree. My problem is that this is not the current state of affairs for the majority of Forum users, in which case I have no reason to update my credences just because an uncalibrated random person says they’re 90% confident without providing any reasoning that justifies their position. All I’m asking is for people to provide a good argument along with their credence.
I agree that EAs put superforecasters and superforecasting techniques on a pedestal, more than is warranted.
I think that they should be emulated. But superforecasters have reasoning to justify their credences. They break problems down into components that they’re more confident in estimating. This is good practice. Providing a credence without any supporting argument is not.
I’m curious if you agree or disagree with this claim:
The median EA is closer to a typical superforecaster than they are to random people
With a specific operationalization like:
If asked to predict on 50 random questions on Good Judgement Open, the median commenter on the EA Forum would have a Brier score closer to that of a typical superforecaster than to that of a randomly selected English-speaking person.
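For readers unfamiliar with the metric, here is a minimal sketch of the (binary) Brier score that operationalization relies on; the example forecasts and outcomes are placeholders:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary
    outcomes (1 = happened, 0 = didn't). Lower is better: 0 is perfect,
    and always answering 50% scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts on three resolved questions:
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # ~0.047
print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.25, the coin-flip baseline
```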
It’s almost irrelevant: people should still provide the supporting arguments for their credences, otherwise evidence can get “double counted” (and there are “flow-on” effects, where the first person to update another person’s credence has a significant effect on the overall credence of the population). For example, say I have arguments A and B supporting my 90% credence on something, and you have arguments A, B and C supporting your 80% credence on the same thing. And neither of us posts our reasoning; we just post our credences. It’s a mistake for you to then say “I’ll update my credence a few percent because FCCC might have other evidence.” For this reason, providing supporting arguments is a net benefit, irrespective of the accuracy of EAs’ forecasts.
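The double-counting worry can be made precise in log-odds terms. A minimal sketch, with invented evidence strengths, of how pooling bare credences overshoots when the underlying arguments overlap:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

prior = logit(0.5)           # shared 50% prior
A, B, C = 1.2, 0.8, -0.6     # evidence strengths in log-odds units (invented)

fccc = prior + A + B         # one commenter has seen arguments A and B
other = prior + A + B + C    # the other has seen A, B and C

# Naive pooling treats the two credences as independent signals and
# stacks both updates on the shared prior, counting A and B twice.
naive = prior + (fccc - prior) + (other - prior)

# Pooling correctly requires knowing the arguments, so each piece of
# evidence is counted exactly once.
correct = prior + A + B + C

print(f"naive pooling:   {sigmoid(naive):.2f}")   # ~0.97, overconfident
print(f"correct pooling: {sigmoid(correct):.2f}") # ~0.80
```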
I don’t find your arguments persuasive for why people should give reasoning in addition to credences. I think posting reasoning is, on the margin, of net value, and I wish more people did it, but I also acknowledge that people’s time is expensive, so I understand why they choose not to. You list reasons why giving reasoning is beneficial, but not reasons why the benefit is sufficient to justify the cost.
My earlier question probing the predictive ability of EAs was an attempt to set right what I consider to be an inaccuracy in the internal impressions EAs have about the ability of superforecasters. In particular, it’s not obvious to me that we should trust the judgments of superforecasters substantially more than we trust the judgments of other EAs.
My view is that giving explicit, quantitative credences plus stating the supporting evidence is typically better than giving explicit, quantitative credences without stating the supporting evidence (at least if we ignore time costs, information hazards, etc.), which is in turn typically better than giving qualitative probability statements (e.g., “pretty sure”) without stating the supporting evidence, and often better than just saying nothing.
Does this match your view?
In other words, are you essentially just arguing that “providing supporting arguments is a net benefit”?
I ask because I had the impression that you were arguing that it’s bad for people to give explicit, quantitative credences if they aren’t also giving their supporting evidence (and that it’d be better for them, in such cases, to either use qualitative statements or just say nothing). Upon re-reading the thread, I got the sense that others may have gotten that impression too, but I also don’t see you explicitly make that argument.
Basically, yeah.

But I do think it’s a mistake to update your credence based off someone else’s credence without knowing their argument and without knowing whether they’re calibrated. We typically don’t know the latter, so I don’t know why people are giving credences without supporting arguments. It’s fine to have a credence without evidence, but why are people publicising such credences?
I do think it’s a mistake to update your credence based off someone else’s credence without knowing their argument and without knowing whether they’re calibrated.
I’d agree with a modified version of your claim, along the following lines: “You should update more based on someone’s credence if you have more reason to believe their credence will track the truth, e.g. by knowing they’ve got good evidence (even if you haven’t actually seen the evidence) or knowing they’re well-calibrated. There’ll be some cases where you have so little reason to believe their credence will track the truth that, for practical purposes, it’s essentially not worth updating.”
But your claim at least sounds like it’s instead that some people are calibrated while others aren’t (a binary distinction), and when people aren’t calibrated, you really shouldn’t update based on their credences at all (at least if you haven’t seen their arguments).
I think calibration increases in a quantitative, continuous way, rather than switching from off to on. So I think we should just update on credences more the more calibrated the person giving them is.

Does that sound right to you?
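One way to cash out that continuous picture, as a sketch rather than anything proposed in the thread: scale how far you move toward someone’s stated credence by a calibration weight learned from their track record. The function name and weighting scheme here are illustrative assumptions:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def update_on_credence(my_p, their_p, calibration_weight):
    """Move my credence toward theirs in log-odds space, scaled by a
    continuous calibration weight (0 = ignore their credence entirely,
    1 = adopt their signal at full strength). One possible scheme, not
    anything specified in the thread."""
    mine, theirs = logit(my_p), logit(their_p)
    return sigmoid(mine + calibration_weight * (theirs - mine))

# The same stated 90% credence, from sources with different track records:
for weight in (0.0, 0.2, 0.8):
    print(f"weight {weight}: {update_on_credence(0.5, 0.9, weight):.2f}")
# weight 0.0 -> 0.50 (no update); 0.2 -> 0.61; 0.8 -> 0.85
```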