On the Forum, invalid meta-arguments* are pretty common, such as “people make logic mistakes so you might have too”, rather than actually identifying the weaknesses in an argument.
Here are two claims I’d very much agree with:
It’s often best to focus on object-level arguments rather than meta-level arguments, especially arguments alleging bias
One reason for that is that the meta-level arguments will often apply to a similar extent to a huge number of claims/people. E.g., a huge number of claims might be influenced substantially by confirmation bias.
(Here are two relevant posts.)
Is that what you meant?
But you say invalid meta-arguments, and then give the example “people make logic mistakes so you might have too”. That example seems perfectly valid, just often not very useful.
And I’d also say that that example meta-argument could sometimes be useful. In particular, if someone seems extremely confident about something based on a particular chain of logical steps, it can be useful to remind them that there have been people in similar situations in the past who’ve been wrong (though also some who’ve been right). They’re often wrong for reasons “outside their model”, so this person not seeing any reason they’d be wrong doesn’t provide extremely strong evidence that they’re not.
It would be invalid to say, based on that alone, “You’re probably wrong”, but saying they’re plausibly wrong seems both true and potentially useful.
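(To make the “probably wrong” vs “plausibly wrong” distinction concrete, here’s a toy calculation; the number of steps and the per-step reliability are purely assumed for illustration, not taken from anyone’s actual argument.)

```python
# Toy model: a conclusion rests on a 10-step chain of reasoning, and each
# step independently has a 97% chance of being sound (both numbers assumed).
p_all_steps_sound = 0.97 ** 10
print(round(p_all_steps_sound, 2))  # ~0.74

# So roughly 26% of such chains contain a flaw somewhere: enough to say the
# conclusion is "plausibly wrong", but not enough to say it's "probably wrong".
```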
(Also, isn’t your comment primarily meta-arguments of a somewhat similar nature to “people make logic mistakes so you might have too”? I guess your comment is intended to be a bit closer to a specific reference class forecast type argument?)
There’s also a lot of pseudo-superforecasting, like “I have 80% confidence in this”, without any evidence backing up those credences.
Describing that as pseudo-superforecasting feels unnecessarily pejorative. I think such people are just forecasting / providing estimates. They may indeed be inspired by Tetlock’s work or other work with superforecasters, but that doesn’t mean they’re necessarily trying to claim their estimates use the same methodologies or deserve the same weight as superforecasters’ estimates. (I do think there are potential downsides to using explicit probabilities, but each potential downside is debatable, there are also potential upsides, and using seemingly pejorative terms probably doesn’t help.)
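(As an aside on the potential upsides: one reason an explicit credence can be useful even without a supporting argument is that it can be combined with your own estimate, weighted by how much you trust the person’s track record. The sketch below is purely my own illustration of one common aggregation method, log-odds pooling; none of the numbers or weights come from the discussion above.)

```python
import math

def pool_credences(credences, weights):
    """Combine probability estimates via a weighted average in log-odds space.

    Just one common aggregation method, shown for illustration; it's not a
    claim about how anyone intends their stated credences to be used.
    """
    total = sum(weights)
    avg_log_odds = sum(
        w * math.log(p / (1 - p)) for p, w in zip(credences, weights)
    ) / total
    return 1 / (1 + math.exp(-avg_log_odds))

# E.g. pooling my own 50% prior with someone else's stated 80%, trusting
# their judgement twice as much as my own (weights are assumed, not real):
print(round(pool_credences([0.5, 0.8], [1, 2]), 2))  # ~0.72
```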
Effective altruists have centred around some ideas that are correct (longtermism, moral uncertainty, etc.)
Did you mean “some ideas that are probably correct and very important”? If so, I’d agree. But I wouldn’t want to imply longtermism (and to a lesser extent moral uncertainty) are simply “correct” (rather than “quite likely” or “what we should act based on, given (meta)moral uncertainty and expected value reasoning”).
but outside of that, I’d say we’re just as wrong as anyone else.
I’d disagree with that. I definitely don’t think EAs are perfect, but they do seem above-average in their tendency to have true beliefs and update appropriately on evidence, across a wide range of domains.
But you say invalid meta-arguments, and then give the example “people make logic mistakes so you might have too”. That example seems perfectly valid, just often not very useful.
My definition of an invalid argument contains “arguments that don’t reliably differentiate between good and bad arguments”. “1+1=2” is also a correct statement, but that doesn’t make it a valid response to any given argument. Arguments need to have relevancy. I dunno, I could be using “invalid” incorrectly here.
And I’d also say that that example meta-argument could sometimes be useful.
Yes, if someone believed that having a logical argument is a guarantee of being right, and they’ve never had one of their own logical arguments turn out to have a surprising flaw, it would be valid to point that out. That’s fair. But (as you seem to agree) the best way to do this is to actually point to the flaw in the specific argument they’ve made. And since most people who are proficient with logic already know that logical arguments can be unsound, it’s not useful to reiterate that point to them.
Also, isn’t your comment primarily meta-arguments of a somewhat similar nature to “people make logic mistakes so you might have too”?
It is, but as I said, “Some meta-arguments are valid”. (I can describe how I delineate between valid and invalid meta-arguments if you wish.)
Describing that as pseudo-superforecasting feels unnecessarily pejorative.
Ah sorry, I didn’t mean to offend. If they were superforecasters, their credence alone would update mine. But they’re probably not, so I don’t understand why they give their credence without a supporting argument.
Did you mean “some ideas that are probably correct and very important”?
The set of things I give 100% credence is very, very small (i.e. claims that are true even if I’m a brain in a vat). I could say “There’s probably a table in front of me”, which is technically more correct than saying that there definitely is, but it doesn’t seem valuable to qualify every statement like that.
Why am I confident in moral uncertainty? People do update their morality over time, which means that either they were wrong at some point (i.e. there is demonstrably moral uncertainty), or the definition of “correct” changes and nobody is ever wrong. I think “nobody is ever wrong” is highly unlikely, especially because you can point to logical contradictions in people’s moral beliefs (not just unintuitive conclusions). At that point, it’s not worth mentioning the uncertainty I have.
I definitely don’t think EAs are perfect, but they do seem above-average in their tendency to have true beliefs and update appropriately on evidence, across a wide range of domains.
Yeah, I’m too focused on the errors. I’ll concede your point: some proportion of EAs are here because they correctly evaluated the arguments, so they’re going to bump up the average, even outside of EA’s central ideas. My reference class here was groups that have correct central ideas and yet are very poor reasoners outside of their domain. My experience with EAs is too limited to support my initial claim.
Why am I confident in moral uncertainty? People do update their morality over time [...]
Oh, when you said “Effective altruists have centred around some ideas that are correct (longtermism, moral uncertainty, etc.)”, I assumed (perhaps mistakenly) that by “moral uncertainty” you meant something vaguely like the idea that “We should take moral uncertainty seriously, and think carefully about how best to handle it, rather than necessarily just going with whatever moral theory currently seems best to us.”
So not just the idea that we can’t be certain about morality (which I’d be happy to say is just “correct”), but also the idea that that fact should change our behaviour in substantial ways. I think that both of those ideas are surprisingly rare outside of EA, but the latter one is rarer, and perhaps more distinctive to EA (though not unique to EA, as there are some non-EA philosophers who’ve done relevant work in that area).
On my “inside-view”, the idea that we should “take moral uncertainty seriously” also seems extremely hard to contest. But I move a little away from such confidence, and probably wouldn’t simply call it “correct”, due to the fact that most non-EAs don’t seem to explicitly endorse something clearly like that idea. (Though maybe they endorse somewhat similar ideas in practice, even just via ideas like “agree to disagree”.)