It seems to me that many people have intuitions in the direction of “it’s extremely hard to know with any confidence anything about the eventual consequences of our actions”. The place these intuitions come from provides some support for at least two problems for trying to do good in the world:
(1) Maybe we just have so little idea that, even in principle, trying to choose actions aimed at good eventual consequences is misguided.
(2) The massive amounts of uncertainty around consequences mean that doing good is a very hard problem, and that a key part of pursuing it well is finding strategies which are somewhat robust to this uncertainty.
In some sense (2) is a weaker version of concern (1), and it only looks attractive to address conditional on (1) not biting.
What should these be called? I think (1) is almost always called cluelessness, and (2) is sometimes called cluelessness, but it seems like it would be helpful to have distinct terms to refer to them. Also, from my perspective (1) is a reasonable thing to worry about but it looks like the concern ultimately doesn’t stand up, whereas I think that (2) is perhaps the central problem for the effective altruist project, so I’m particularly interested in having a good name for (2).
“The epistemic hurdle” (to doing good) feels catchy and like it captures (2). Not sure it’s actually good, but I wanted to leave it here for you to judge.
“Epistemic hurdle” is nicely concise, and I like the corresponding mental image of EAs who are ready to run to do good, but need to overcome the barrier of (2).
I suggest that (1) should be called “the problem of absolute cluelessness” and that (2) should be called “the practical problem of cluelessness”.
When context is clear one could drop the adjective. My suspicion is that with time (1) will come to be regarded as a solved problem, and (2) will still want a lot of attention. I think it’s fine/desirable if at that point it gets to use the pithier term of “cluelessness”. I also think that it’s probably good if (1) and (2) have names which make it clear that there’s a link between them. I think there may be a small transition cost from current usage, but (a) there just isn’t that much total use of the terms now, and (b) current usage seems inconsistent about whether it includes (2).
I agree that this distinction is important and that it would be good to have two terms for these different concepts.
I see the motivation for terms like “weak cluelessness” or “the practical problem of cluelessness”. To me it sounds slightly odd to use the word “clueless” for (2), however, given the associations that word has (cf. Cambridge dictionary).
(1) is not a gradable concept—if we’re clueless, then in Hilary Greaves’ words, we “can never have even the faintest idea” which of two actions is better.
(2), on the other hand, is a gradable concept—it can be more or less difficult to find the best strategies. Potentially it would be good to have a term that is gradable, for that reason.
One possibility is something relating to (un)predictability or (un)foreseeability. That has the advantage that it relates to forecasting.
(Note that absolute cluelessness can also be expressed in terms of (un)predictability—you can say that it’s totally unpredictable which strategies have the highest impact.)
In everyday language I actually think this fits passably well. The dictionary gives the definition “having no knowledge of something”. For (2) I feel like informally I’d be happy with someone saying that the problem is we have no knowledge of how our actions will turn out, so long as they clarified that they didn’t mean absolutely no knowledge. Of course this isn’t perfect; I’d prefer they said “basically no knowledge” in the first place. But it’s also the case that informally “clueless” is often modified with intensifiers (e.g. “totally”, “completely”), so I think that a bare “clueless” doesn’t really connote having no idea at all.
Yeah, I’m unsure. I think that the term “clueless” is usually used to refer to people who are incompetent (cf. the synonyms). (That’s why they have no knowledge.) But in this case we don’t lack knowledge because we’re incompetent, but because the task at hand is hard. And one might consider using a term or phrase that implies that. But there are pros and cons of all candidates.
I think this is a good point which I wasn’t properly appreciating. It doesn’t seem particularly worse for (2) than for (1), except insofar as terminology is more locked in for (1) than (2).
Of course, a possible advantage of “clueless” is that it strikes a self-deprecating tone; if we’re worried about being perceived as arrogant, then having the language err on the side of assigning blame to ourselves rather than the universe might be a small help.
I like “opaqueness” for the reason that it is gradable.
I appreciate you making this distinction. Although it makes me all the more want to use one term (e.g. clueless) for (2), and a modified version (absolutely clueless, or totally clueless, or perhaps infinitely clueless) for (1). I think that the natural relation between the two concepts is that (1) is something like a limiting case of (2) taken to the extreme, so it’s ideal if the terminology reflects that.
A couple of other remarks around this:
I think the fact that “totally clueless” is a common English phrase suggests that “clueless” is grammatically seen as a gradable concept.
I agree that in principle (2) is a gradable concept so we might want to have language that can express it.
In practice my instinct is that most of the time one will point at the problem and put attention on possible responses, and it won’t be that helpful to discuss exactly how severe the problem is.
However, I like the idea that being able to express gradations might make it easier to notice the concept of gradations.
I can dream that eventually we could find a natural metric for degree-of-cluelessness …
Hmm, I’m unsure whether the link to forecasting is more of an advantage or a disadvantage. It’s suggestive of the idea that one deals with the problem by becoming better at forecasting, which I think is something which is helpful, but probably only a small minority of how we should address it.
I agree that that shouldn’t be the main strategy. But my sense is that this issue isn’t a disadvantage of using a term like “predictability” or a synonym.
I think one advantage of such a term is that it relates to major areas of research, that many people know about.
Another term is “uncertainty”; cf. “radical uncertainty”.
I think that bare terms like “unpredictability” or particularly “uncertainty” are much too weak; they don’t properly convey the degree of epistemic challenge, and hence don’t pick out what’s unusual about the problem situation that we’re grappling with.
“Unforeseeability” is a bit stronger, but still seems rather too weak. I think “unknowability”, “radical uncertainty”, and “cluelessness” are all in the right ballpark for their connotations.
I do think “unknowability” for (2) and “absolute/total unknowability” for (1) is an interesting alternative. Using “unknowable” rather than “clueless” puts the emphasis on the decision situation rather than the agent; I’m not sure whether that’s better.
Yeah, I agree that one would need to add some adjective (e.g. “total” or “radical”) to several of these.
“Unknowability” sounds good at first glance; I’d need to think about use cases.
I see now that you made the agent/decision-situation distinction that I also made above. I do think that “unknowable” putting the emphasis on the decision situation is to its advantage.
Could also go for tractable and intractable cluelessness?
Also I wonder if we should be distinguishing between empirical and moral cluelessness—with the former being about claims about consequences and the latter about fundamental ethical claims.
Some alternatives in a similar vein:
(1) = strong cluelessness / (2) = weak cluelessness
(1) = total cluelessness / (2) = partial cluelessness
I guess I kind of like the word “practical” for (2), to point to the fact that it isn’t the type of thing that will have a clean philosophical resolution.
I’ve mentioned in a different thread that we could refer to them as (1) aleatory versus (2) epistemic.
I would propose reusing the closely related idea of aleatory and epistemic uncertainty for cluelessness.
Type 1 is aleatory, i.e. truly random, impossible to reduce, and fundamental. Type 2 is epistemic, i.e. we do not yet have the tools to fix it, but in theory it can be fixed. And this relates to what I’ve called the aleatory baseline problem in forecasting—it’s unclear how much of a prediction is irreducible uncertainty, and how much is just expensive to forecast.
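The aleatory/epistemic split can be made concrete with a toy Bernoulli model (my own illustration, not from this thread): uncertainty about a coin’s unknown bias is epistemic and shrinks as we collect data, while the flip-to-flip randomness is aleatory and stays put however much data we gather.

```python
import random

random.seed(0)

# Hypothetical coin with unknown bias p_true. We (the modeller) don't get to
# see p_true; we only see flips, and maintain a Beta(1,1) posterior over it.
p_true = 0.7
flips = [random.random() < p_true for _ in range(10_000)]

for n in [10, 100, 10_000]:
    heads = sum(flips[:n])
    # Beta posterior over p after n flips: Beta(1 + heads, 1 + tails).
    a, b = 1 + heads, 1 + n - heads
    post_mean = a / (a + b)
    post_var = a * b / ((a + b) ** 2 * (a + b + 1))  # epistemic: shrinks ~1/n
    # Variance of a single future flip, evaluated at the posterior mean:
    # this is the aleatory part, and no amount of data drives it to zero.
    aleatory_var = post_mean * (1 - post_mean)
    print(f"n={n:>6}: epistemic var {post_var:.5f}, aleatory var {aleatory_var:.3f}")
```

The epistemic variance collapses toward zero with more flips, while the aleatory variance stays near p(1−p) ≈ 0.21, which is one way of cashing out “we do not yet have the tools to fix it” versus “truly random, impossible to reduce”.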
Off-topic, but can you give an example of irreducible uncertainty? I’ve been thinking that, technically, all uncertainty is epistemic uncertainty and that what people call aleatoric uncertainty is really just epistemic uncertainty that is quite expensive to reduce.
There are trivial examples, like when the decay of a given uranium atom will occur, but it seems likely there are macroscopic phenomena that are also irreducibly uncertain over time.
For instance, it’s probably the case that long-term weather prediction is fundamentally impossible past some point. Current simulations of atmospheric dynamics use grids on the order of kilometers, and have decent precision out to about two weeks. But if we knew the positions, velocities, and temperatures of every particle in the atmosphere as of today to, say, 2 decimal places (alongside future solar energy input fluctuations, the temperature of the earth, etc.), we could in theory simulate it in full detail to know what things would be like in, say, a month. We would still lose precision over time, though, and because weather is a chaotic system, more than a couple of months into the future the loss of precision would be so severe that we would have essentially no information. And at some point, the degree of precision needed to extend the prediction horizon hits hard limits due to quantum uncertainties, at which point we have fundamental reasons to think it’s impossible to know more.
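The precision-loss dynamic here can be shown with a much simpler chaotic system than the atmosphere. The logistic map at r = 4 is a standard toy example (my choice, not anything from the thread): two trajectories whose starting points differ by 10⁻¹⁰ separate roughly exponentially until the remaining agreement carries essentially no information.

```python
# Toy chaotic system: the logistic map x -> r*x*(1-x) with r = 4.
# Two runs start a tiny distance apart; we track how fast they diverge.
def logistic_trajectory(x, steps, r=4.0):
    traj = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        traj.append(x)
    return traj

a = logistic_trajectory(0.2, 60)
b = logistic_trajectory(0.2 + 1e-10, 60)

gaps = [abs(p - q) for p, q in zip(a, b)]

# Early on the gap is still tiny; knowing one trajectory pins down the other.
print(f"gap at step 10: {gaps[10]:.2e}")
# Later the gap saturates at order 1: the initial agreement to 10 decimal
# places has been completely eaten, and the two runs are uncorrelated.
print(f"gap at step 50: {gaps[50]:.2e}")
```

This is the same structure as the weather argument above: each step of simulation amplifies the initial measurement error, so past some horizon extra decimal places of initial precision buy almost no extra forecast length.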
Quantum randomness seems aleatory, so anything that depends on that to a large extent (everything depends on that to some extent) would probably also fit the term.
One of the challenges is that “absolute cluelessness” is a precise claim: beyond some threshold of impact scale or time, we can never have any ability to predict the overall moral consequences of any action.
By contrast, the practical problem is not a precise claim, except perhaps as a denial of “absolute cluelessness.”
After thinking about it for a while, I suggest “problem of non-absolute cluelessness.” After all, isn’t it the idea that we are not clueless about the long term future, and therefore that we have a responsibility to predict and shape it for the good, that is the source of the problem? If we were absolutely clueless, then we would not have that responsibility and would not face that problem.
So I might vote for “absolutely clueless” and “non-absolutely clueless” to describe the state of being, and the “problem of absolute cluelessness” and “problem of non-absolute cluelessness” to describe the respective philosophical problems.
“Partial” might work instead of “non-absolute,” but I still favor the latter even though it’s bulkier. I like that “non-absolute” points to a challenge that arises when our predictive powers are nonzero, even if they are very slim indeed. By contrast, “partial” feels more aligned with the everyday problem of reasoning under uncertainty.