Oh, I meant pessimistic. A reason for a weak update might be similar to the Gell-Mann amnesia effect. After putting effort into classical arguments, you noticed some important flaws. The fact that they have not been articulated before suggests that collective EA epistemology is weaker than expected. Because of that, one might get less certain about the quality of arguments in other EA domains.
So, in short, the Gell-Mann Amnesia effect is when experts forget how badly their own subject is treated in media and believe that subjects they don’t know much about are treated more competently by the same media.
I’d say nearly everyone’s ability to determine an argument’s strength is very weak. On the Forum, invalid meta-arguments* are pretty common, such as “people make logic mistakes so you might have too”, rather than actually identifying the weaknesses in an argument. There’s also a lot of pseudo-superforecasting, like “I have 80% confidence in this”, without any evidence backing up those credences. This seems to me like people are imitating sound arguments without actually understanding how they work. Effective altruists have centred around some ideas that are correct (longtermism, moral uncertainty, etc.), but outside of that, I’d say we’re just as wrong as anyone else.
*Some meta-arguments are valid, like discussions on logical grounding of particular methodologies, e.g. “Falsification works because of the law of contraposition, which follows from the definition of logical implication”.
There’s also a lot of pseudo-superforecasting, like “I have 80% confidence in this”, without any evidence backing up those credences.
From a bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences, and in general I think there is a lot of value in people providing credences even if they don’t provide additional evidence, if only to avoid problems of ambiguous language.
From a bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences
I’m not sure I know what you mean by this.
I’d agree that you’re definitely not obligated to provide more evidence, and that your credence does fully capture how likely you think it is that X will happen.
But it seems to me that the evidence that informed your credence can also be very useful information for people, both in relation to how much they should update their own credences (as they may have info you lack regarding how relevant and valid those pieces of evidence are), and in relation to how—and how much—you might update your views (e.g., if they find out you just thought for 5 seconds and went with your gut, vs spending a year building expertise and explicit models). It also seems like sharing that evidence could help them with things like building their general models of the world or of how to make estimates.
(This isn’t an argument against giving explicit probabilities that aren’t based on much or that aren’t accompanied by explanations of what they’re based on. I’m generally, though tentatively, in favour of that. It just seems like also explaining what the probabilities are based on is often quite useful.)
(By the way, Beard et al. discuss related matters in the context of existential risk estimates, using the term “evidential reasoning”.)
This is in contrast to a frequentist perspective, or maybe something close to a “common-sense” perspective, which tends to bucket knowledge into separate categories that aren’t easily interchangeable.
Many people make a mental separation between “thinking something is true” and “thinking something is X% likely, where X is high”, with one falling into the category of lived experience, and the other falling into the category of “scientific or probabilistic assessment”. The first one doesn’t require any externalizable evidence and is a fact about the mind; the second is part of a collaborative scientific process that has at its core repeatable experiments, or at least recurring frequencies (see, e.g., the frequentist position that it is meaningless to assign probabilities to one-time events).
Under some of these other non-bayesian interpretations of probability theory, an assignment of probabilities is not valid if you don’t associate it with either an experimental setup, or some recurring frequency. So under those interpretations you do have an additional obligation to provide evidence and context to your probability estimates, since otherwise they don’t really form even a locally valid statement.
Thanks for that answer. So just to check, you essentially just meant that it’s ok to provide credences without saying your evidence—i.e., you’re not obligated to provide evidence when you provide credences? Not that there’s no added value to providing your evidence alongside your credences?
If so, I definitely agree.
(And it’s not that your original statement seemed to clearly say something different, just that I wasn’t sure that that’s all it was meant to mean.)
Yep, that’s what I was implying.
From a bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences
This statement is just incorrect. Sure there is: By communicating, we’re trying to update one another’s credences. You’re not going to be very successful in doing so if you provide a credence without supporting evidence. The evidence someone provides is far more important than someone’s credence (unless you know the person is highly calibrated and precise). If you have a credence that you keep to yourself, then yes, there’s no need for supporting evidence.
Ambiguous statements are bad, 100%, but so are clear, baseless statements.
As you say, people can legitimately have credences about anything. It’s how people should think. But if you’re going to post your credence, provide some evidence so that you can update other people’s credences too.
Ambiguous statements are bad, 100%, but so are clear, baseless statements.
You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report. The former claim is undoubtedly true, but it doesn’t necessarily describe a problematic phenomenon. (See Greg Lewis’s recent post; I’m not sure if you disagree.) The latter claim would be very worrying if true, but I don’t see reason to believe that it is. Sure, EAs sometimes lack good reasons for the views they espouse, but this is a general phenomenon unrelated to the practice of reporting credences explicitly.
You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report.
Habryka seems to be talking about people who have evidence and are just not stating it, so we might be talking past one another. I said in my first comment “There’s also a lot of pseudo-superforecasting … without any evidence backing up those credences.” I didn’t say “without stating any evidence backing up those credences.” This is not a guess on my part. I’ve seen comments where they say explicitly that the credence they’re giving is a first impression, and not something well thought out. It’s fine for them to have a credence, but why should anyone care what your credence is if it’s just a first impression?
See Greg Lewis’s recent post; I’m not sure if you disagree.
I completely agree with him. Imprecision should be stated and significant figures are a dumb way to do it. But if someone said “I haven’t thought about this at all, but I’m pretty sure it’s true”, is that really all that much worse than providing your uninformed prior and saying you haven’t really thought about it?
I agree that EAs put superforecasters and superforecasting techniques on a pedestal, more than is warranted.
But if someone said “I haven’t thought about this at all, but I’m pretty sure it’s true”, is that really all that much worse than providing your uninformed prior and saying you haven’t really thought about it?
Yes, I think it’s a lot worse. Consider the two statements:
I haven’t thought much about it, but I’m pretty sure (99.99%) based on a cursory read that human extinction from climate change won’t happen.
And
I haven’t thought much about it, but I’m pretty sure (80%) based on a cursory read that human extinction from climate change won’t happen.
The two statements are pretty similar in verbalized terms (and each falls under loose interpretations of what “pretty sure” means in common language), but ought to have drastically different implications for behavior!
I basically think EA and associated communities would be better off to have more precise credences, and be accountable for them. Otherwise, it’s difficult to know if you were “really” wrong, even after checking hundreds of claims!
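(A minimal numerical sketch of that point, with entirely made-up disvalue and cost figures, just to show how 80% and 99.99% can land on opposite sides of a decision threshold under expected-value reasoning:)

```python
# Toy expected-value comparison (all numbers are made-up assumptions).
DISVALUE_OF_EXTINCTION = 1_000_000  # hypothetical units
COST_OF_MITIGATION = 100            # hypothetical units

for p_no_extinction in (0.9999, 0.80):
    expected_loss = (1 - p_no_extinction) * DISVALUE_OF_EXTINCTION
    worth_mitigating = expected_loss > COST_OF_MITIGATION
    print(f"P(no extinction)={p_no_extinction}: expected loss "
          f"{expected_loss:,.0f}, mitigate: {worth_mitigating}")

# 99.99%: expected loss ~100 (roughly the cost of mitigation itself)
# 80%:    expected loss 200,000 (mitigation is clearly worth it)
```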
The two statements are pretty similar in verbalized terms (and each falls under loose interpretations of what “pretty sure” means in common language), but ought to have drastically different implications for behavior!
Yes, you’re right. But I’m making a distinction between people’s own credences and their ability to update the credences of other people. As for changing the reader’s opinion: when someone says “I haven’t thought much about it”, that should be an indicator not to update your own credence by very much at all.
I basically think EA and associated communities would be better off to have more precise credences, and be accountable for them
I fully agree. My problem is that this is not the current state of affairs for the majority of Forum users, so I have no reason to update my credences just because an uncalibrated random person says they’re 90% confident without providing any reasoning that justifies their position. All I’m asking is for people to provide a good argument along with their credence.
I agree that EAs put superforecasters and superforecasting techniques on a pedestal, more than is warranted.
I think that they should be emulated. But superforecasters have reasoning to justify their credences. They break problems down into components that they’re more confident in estimating. This is good practice. Providing a credence without any supporting argument is not.
I’m curious if you agree or disagree with this claim:
The median EA is closer to a typical superforecaster than they are to random people
With a specific operationalization like:
If asked to predict on 50 random questions on Good Judgement Open, the median commenter on the EA Forum would have a Brier score closer to a typical superforecaster’s than to that of a randomly selected English-speaking person.
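(For concreteness, the Brier score used in this operationalization is just the mean squared error of probability forecasts against 0/1 outcomes; a minimal sketch with hypothetical numbers:)

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.
    Lower is better; always answering 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts on three resolved questions (1 = it happened):
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # ~0.047 (decent forecaster)
print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.25  (coin-flipper)
```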
It’s almost irrelevant: people should still provide the supporting argument for their credence, otherwise evidence can get “double counted” (and there are “flow-on” effects, where the first person who updates another person’s credence has a significant effect on the overall credence of the population). For example, say I have arguments A and B supporting my 90% credence on something, and you have arguments A, B and C supporting your 80% credence on the same thing, and neither of us posts our reasoning; we just post our credences. It’s a mistake for you to then say “I’ll update my credence a few percent because FCCC might have other evidence.” For this reason, providing supporting arguments is a net benefit, irrespective of EA’s accuracy of forecasts.
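(A toy odds-form version of that A/B/C example, with made-up likelihood ratios, showing how updating on an unexplained credence double counts the shared evidence:)

```python
# Odds-form Bayes: posterior odds = prior odds x product of likelihood ratios.
def to_prob(odds):
    return odds / (1 + odds)

PRIOR_ODDS = 1.0                  # 50% prior (assumed)
LR_A, LR_B, LR_C = 3.0, 3.0, 4/9  # made-up likelihood ratios for A, B, C

my_odds = PRIOR_ODDS * LR_A * LR_B            # I've considered A and B
your_odds = PRIOR_ODDS * LR_A * LR_B * LR_C   # you've considered A, B and C

print(to_prob(my_odds))    # 0.9 -- my 90% credence
print(to_prob(your_odds))  # 0.8 -- your 80% credence

# Treating my posted credence as if it were independent evidence
# multiplies A and B in twice:
naive_odds = your_odds * (my_odds / PRIOR_ODDS)
print(to_prob(naive_odds))  # ~0.97, too confident: the correct answer is
                            # your 0.8, since your evidence subsumes mine.
```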
I don’t find your arguments persuasive for why people should give reasoning in addition to credences. I think posting reasoning is on the margin of net value, and I wish more people did it, but I also acknowledge that people’s time is expensive so I understand why they choose not to. You list reasons why giving reasoning is beneficial, but not reasons for why it’s sufficient to justify the cost.
My question probing the predictive ability of EAs earlier was an attempt to set right what I consider to be an inaccuracy in the internal impressions EAs have about the ability of superforecasters. In particular, it’s not obvious to me that we should trust the judgments of superforecasters substantially more than we trust the judgments of other EAs.
My view is that giving explicit, quantitative credences plus stating the supporting evidence is typically better than giving explicit, quantitative credences without stating the supporting evidence (at least if we ignore time costs, information hazards, etc.), which is in turn typically better than giving qualitative probability statements (e.g., “pretty sure”) without stating the supporting evidence, and often better than just saying nothing.
Does this match your view?
In other words, are you essentially just arguing that “providing supporting arguments is a net benefit”?
I ask because I had the impression that you were arguing that it’s bad for people to give explicit, quantitative credences if they aren’t also giving their supporting evidence (and that it’d be better for them to, in such cases, either use qualitative statements or just say nothing). Upon re-reading the thread, I got the sense that others may have gotten that impression too, but also I don’t see you explicitly make that argument.
Basically, yeah.
But I do think it’s a mistake to update your credence based off someone else’s credence without knowing their argument and without knowing whether they’re calibrated. We typically don’t know the latter, so I don’t know why people are giving credences without supporting arguments. It’s fine to have a credence without evidence, but why are people publicising such credences?
I do think it’s a mistake to update your credence based off someone else’s credence without knowing their argument and without knowing whether they’re calibrated.
I’d agree with a modified version of your claim, along the following lines: “You should update more based on someone’s credence if you have more reason to believe their credence will track the truth, e.g. by knowing they’ve got good evidence (even if you haven’t actually seen the evidence) or knowing they’re well-calibrated. There’ll be some cases where you have so little reason to believe their credence will track the truth that, for practical purposes, it’s essentially not worth updating.”
But your claim at least sounds like it’s instead that some people are calibrated while others aren’t (a binary distinction), and when people aren’t calibrated, you really shouldn’t update based on their credences at all (at least if you haven’t seen their arguments).
I think calibration increases in a quantitative, continuous way, rather than switching from off to on. So I think we should just update on credences more the more calibrated the person they’re from is.
Does that sound right to you?
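(One toy way to operationalize “update more, the more calibrated they are”: discount the other person’s log-odds by a reliability weight before adding it to your own. The weighting scheme here is an illustrative assumption, not a standard formula:)

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def update_on_credence(my_prob, their_prob, reliability):
    """Blend someone's stated credence into mine, discounting their
    log-odds by a reliability weight in [0, 1]."""
    return sigmoid(logit(my_prob) + reliability * logit(their_prob))

# My prior is 50%; someone states 90% confidence:
for reliability in (0.0, 0.3, 1.0):  # stranger, semi-trusted, superforecaster
    print(reliability, round(update_on_credence(0.5, 0.9, reliability), 2))
# 0.0 -> 0.5 (no update), 0.3 -> 0.66, 1.0 -> 0.9 (adopt their credence)
```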
I mean, very frequently it’s useful to just know what someone’s credence is. That’s often an order of magnitude cheaper to provide, and often is itself quite a bit of evidence. This is like saying that all statements of opinions or expressions of feelings are bad, unless they are accompanied with evidence, which seems like it would massively worsen communication.
I mean, very frequently it’s useful to just know what someone’s credence is. That’s often an order of magnitude cheaper to provide, and often is itself quite a bit of evidence.
I agree, but only if they’re a reliable forecaster. A superforecaster’s credence can shift my credence significantly. It’s possible that their credence is based off a lot of pieces of information, each of which shifts their own credence by 1%. In that case, it’s not practical for them to provide all the evidence, and you are right.
But most people are poor forecasters (and sometimes they explicitly state they have no supporting evidence other than their intuition), so I see no reason to update my credence just because someone I don’t know is confident. If the credence of a random person has any value to my own credence, it’s very low.
This is like saying that all statements of opinions or expressions of feelings are bad, unless they are accompanied with evidence, which seems like it would massively worsen communication.
That would depend on the question. Sometimes we’re interested in feelings for their own sake. That’s perfectly legitimate because the actual evidence we’re wanting is the data about their feelings. But if someone’s giving their feelings about whether there are an infinite number of primes, it doesn’t update my credences at all.
I think opinions without any supporting argument worsen discourse. Imagine a group of people thoughtfully discussing evidence, then someone comes in, states their feelings without any evidence, and then leaves. That shouldn’t be taken seriously. Increasing the proportion of those people only makes it worse.
Bayesians should want higher-quality evidence. Isn’t self-reported data unreliable? And that’s when the person was actually there when the event happened. So what is the reference class for people providing opinions without having evidence? It’s almost certainly even more unreliable. If someone has an argument for their credence, they should usually give that argument; if they don’t have an argument, I’m not sure why they’re adding to the conversation.
I’m not saying we need to provide peer-reviewed articles. I just want to see some line of reasoning demonstrating why you came to the conclusion you made, so that everyone can examine your assumptions and inferences. If we have different credences and the set of things I’ve considered is a strict subset of yours, you might update your credence because you mistakenly think I’ve considered something you haven’t.
Yes, but unreliability does not mean that you should just use vague words instead of explicit credences. It’s a fine critique to say that people make too many arguments without giving evidence (something I also disagree with, but that isn’t the subject of this thread), but you are concretely making the point that it’s additionally bad for them to give explicit credences! But the credences only help, compared to the vague and ambiguous terms that people would use instead.
I’m not sure how you think that’s what I said. Here’s what I actually said:
A superforecaster’s credence can shift my credence significantly...
If the credence of a random person has any value to my own credence, it’s very low...
The evidence someone provides is far more important than someone’s credence (unless you know the person is highly calibrated and precise)...
[credences are] how people should think...
if you’re going to post your credence, provide some evidence so that you can update other people’s credences too.
I thought I was fairly clear about what my position is. Credences have internal value (you should generate your own credence). Superforecasters’ credences have external value (their credence should update yours). Uncalibrated random people’s credences don’t have much external value (they shouldn’t shift your credence much). And an argument for your credence should always be given.
I never said vague words are valuable, and in fact I think the opposite.
This is an empirical question. Again, what is the reference class for people providing opinions without having evidence? We could look at all of the unsupported credences on the forum and see how accurate they turned out to be. My guess is that they’re of very little value, for all the reasons I gave in previous comments.
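(A sketch of what that empirical check could look like; the claims data here is entirely hypothetical, standing in for resolved Forum claims paired with the credences posted alongside them:)

```python
from collections import defaultdict

# Hypothetical resolved claims: (stated credence, whether the claim held up).
claims = [(0.9, True), (0.9, False), (0.9, True), (0.7, False), (0.7, True)]

buckets = defaultdict(list)
for credence, came_true in claims:
    buckets[credence].append(came_true)

for credence in sorted(buckets):
    outcomes = buckets[credence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {credence:.0%}: true {hit_rate:.0%} of {len(outcomes)} claims")
# Well-calibrated commenters' hit rates would track their stated credences.
```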
you are concretely making the point that it’s additionally bad for them to give explicit credences!
I demonstrated a situation where a credence without evidence is harmful:
If we have different credences and the set of things I’ve considered is a strict subset of yours, you might update your credence because you mistakenly think I’ve considered something you haven’t.
The only way we can avoid such a situation is either by providing a supporting argument for our credences, OR not updating our credences in light of other people’s unsupported credences.
On the Forum, invalid meta-arguments* are pretty common, such as “people make logic mistakes so you might have too”, rather than actually identifying the weaknesses in an argument.
Here are two claims I’d very much agree with:
It’s often best to focus on object-level arguments rather than meta-level arguments, especially arguments alleging bias
One reason for that is that the meta-level arguments will often apply to a similar extent to a huge number of claims/people. E.g., a huge number of claims might be influenced substantially by confirmation bias.
(Here are two relevant posts.)
Is that what you meant?
But you say invalid meta-arguments, and then give the example “people make logic mistakes so you might have too”. That example seems perfectly valid, just often not very useful.
And I’d also say that that example meta-argument could sometimes be useful. In particular, if someone seems extremely confident about something based on a particular chain of logical steps, it can be useful to remind them that there have been people in similar situations in the past who’ve been wrong (though also some who’ve been right). They’re often wrong for reasons “outside their model”, so this person not seeing any reason they’d be wrong doesn’t provide extremely strong evidence that they’re not.
It would be invalid to say, based on that alone, “You’re probably wrong”, but saying they’re plausibly wrong seems both true and potentially useful.
(Also, isn’t your comment primarily meta-arguments of a somewhat similar nature to “people make logic mistakes so you might have too”? I guess your comment is intended to be a bit closer to a specific reference class forecast type argument?)
There’s also a lot of pseudo-superforecasting, like “I have 80% confidence in this”, without any evidence backing up those credences.
Describing that as pseudo-superforecasting feels unnecessarily pejorative. I think such people are just forecasting / providing estimates. They may indeed be inspired by Tetlock’s work or other work with superforecasters, but that doesn’t mean they’re necessarily trying to claim their estimates use the same methodologies or deserve the same weight as superforecasters’ estimates. (I do think there are potential downsides of using explicit probabilities, but I think each potential downside is debatable, and there are also potential upsides, and using seemingly pejorative terms probably doesn’t help.)
Effective altruists have centred around some ideas that are correct (longtermism, moral uncertainty, etc.)
Did you mean “some ideas that are probably correct and very important”? If so, I’d agree. But I wouldn’t want to imply longtermism (and to a lesser extent moral uncertainty) are simply “correct” (rather than “quite likely” or “what we should act based on, given (meta)moral uncertainty and expected value reasoning”).
but outside of that, I’d say we’re just as wrong as anyone else.
I’d disagree with that. I definitely don’t think EAs are perfect, but they do seem above-average in their tendency to have true beliefs and update appropriately on evidence, across a wide range of domains.
But you say invalid meta-arguments, and then give the example “people make logic mistakes so you might have too”. That example seems perfectly valid, just often not very useful.
My definition of an invalid argument contains “arguments that don’t reliably differentiate between good and bad arguments”. “1+1=2” is also a correct statement, but that doesn’t make it a valid response to any given argument. Arguments need to have relevancy. I dunno, I could be using “invalid” incorrectly here.
And I’d also say that that example meta-argument could sometimes be useful.
Yes, if someone believed that having a logical argument is a guarantee of correctness, and they’ve never had one of their logical arguments turn out to have a surprising flaw, it would be valid to point that out. That’s fair. But (as you seem to agree) the best way to do this is to actually point to the flaw in the specific argument they’ve made. And since most people who are proficient with logic already know that logical arguments can be unsound, it’s not useful to reiterate that point to them.
Also, isn’t your comment primarily meta-arguments of a somewhat similar nature to “people make logic mistakes so you might have too”?
It is, but as I said, “Some meta-arguments are valid”. (I can describe how I delineate between valid and invalid meta-arguments if you wish.)
Describing that as pseudo-superforecasting feels unnecessarily pejorative.
Ah sorry, I didn’t mean to offend. If they were superforecasters, their credence alone would update mine. But they’re probably not, so I don’t understand why they give their credence without a supporting argument.
Did you mean “some ideas that are probably correct and very important”?
The set of things I give 100% credence to is very, very small (i.e. claims that are true even if I’m a brain in a vat). I could say “There’s probably a table in front of me”, which is technically more correct than saying that there definitely is, but it doesn’t seem valuable to qualify every statement like that.
Why am I confident in moral uncertainty? People do update their morality over time, which means that either they were wrong at some point (i.e. there is demonstrably moral uncertainty), or the definition of “correct” changes and nobody is ever wrong. I think “nobody is ever wrong” is highly unlikely, especially because you can point to logical contradictions in people’s moral beliefs (not just unintuitive conclusions). At that point, it’s not worth mentioning the uncertainty I have.
I definitely don’t think EAs are perfect, but they do seem above-average in their tendency to have true beliefs and update appropriately on evidence, across a wide range of domains.
Yeah, I’m too focused on the errors. I’ll concede your point: Some proportion of EAs are here because they correctly evaluated the arguments. So they’re going to bump up the average, even outside of EA’s central ideas. My reference classes here were all the groups that have correct central ideas, and yet are very poor reasoners outside of their domain. My experience with EAs is too limited to support my initial claim.
Why am I confident in moral uncertainty? People do update their morality over time [...]
Oh, when you said “Effective altruists have centred around some ideas that are correct (longtermism, moral uncertainty, etc.)”, I assumed (perhaps mistakenly) that by “moral uncertainty” you meant something vaguely like the idea that “We should take moral uncertainty seriously, and think carefully about how best to handle it, rather than necessarily just going with whatever moral theory currently seems best to us.”
So not just the idea that we can’t be certain about morality (which I’d be happy to say is just “correct”), but also the idea that that fact should change our behaviour in substantial ways. I think that both of those ideas are surprisingly rare outside of EA, but the latter one is rarer, and perhaps more distinctive to EA (though not unique to EA, as there are some non-EA philosophers who’ve done relevant work in that area).
On my “inside-view”, the idea that we should “take moral uncertainty seriously” also seems extremely hard to contest. But I move a little away from such confidence, and probably wouldn’t simply call it “correct”, due to the fact that most non-EAs don’t seem to explicitly endorse something clearly like that idea. (Though maybe they endorse somewhat similar ideas in practice, even just via ideas like “agree to disagree”.)