Knightian uncertainty / deep uncertainty
A lack of any quantifiable knowledge about some possible occurrence.
This means any situation where uncertainty is so high that it is very hard, impossible, or foolish to quantify the outcomes.
To understand this it is useful to note the difference between uncertainty (e.g. the chance of a nuclear war this century) and risk (e.g. the chance of a coin coming up heads).
The process for making decisions under uncertainty may be very different from the process for making decisions under risk. The optimal tactic for making good decisions in situations of deep uncertainty may not be to just quantify the situation.
Why this matters
This could drastically change the causes EAs care about and the approaches they take.
This could alter how we judge the value of taking action that affects the future.
This could mean that the “rationalist”/LessWrong approach of “shut up and multiply” for making decisions might not be correct.
For example, this could shift decisions away from naive expected value calculations based on outcomes and probabilities, and towards favoring courses of action that are robust to failure modes, have good feedback loops, have short chains of effects, etc.
(Or maybe not, I don’t know. I don’t know enough about how to make optimal decisions under deep uncertainty, but it is something I would like to understand better.)
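The contrast above can be sketched in a toy model (all payoff numbers and action names below are made up for illustration): under risk, where probabilities are known, expected value picks one action; under deep uncertainty, where the probabilities themselves cannot be trusted, a robust rule such as maximin (pick the action with the best worst case) can pick a different one.

```python
# Hypothetical payoffs for two actions under three possible world states.
payoffs = {
    "bold_bet":   [100, 10, -50],  # high upside, bad worst case
    "robust_act": [30, 25, 20],    # modest but stable payoffs
}

# Risk: probabilities are known, so we can compute expected values.
probs = [0.5, 0.3, 0.2]
expected = {a: sum(p * v for p, v in zip(probs, vs)) for a, vs in payoffs.items()}

# Deep uncertainty: no trustworthy probabilities, so compare worst cases instead.
worst_case = {a: min(vs) for a, vs in payoffs.items()}

best_by_ev = max(expected, key=expected.get)           # bold_bet (EV 43 vs 26.5)
best_by_maximin = max(worst_case, key=worst_case.get)  # robust_act (20 vs -50)
```

This is just one robust decision rule; others (minimax regret, satisficing over many scenarios) make the same general point that the preferred action can change once you stop trusting the probability estimates.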
See also
The difference between “risk” and “uncertainty”. “Black swan events”. Etc