I have a concept of paradigm error that I find helpful.
A paradigm error is the error of approaching a problem through the wrong, or an unhelpful, paradigm. For example, trying to quantify the cost-effectiveness of a longtermist intervention under deep uncertainty.
Paradigm errors are hard to recognise, because we evaluate solutions from within our own paradigm. They are best uncovered by people outside our direct network. However, it is harder to communicate productively with people from different paradigms, since they use different language.
It is related to what I see as two lower-level kinds of error:
parameter errors (= the values of parameters being inaccurate)
model errors (= wrong model structure, or wrong/missing parameters)
Paradigm errors are one level higher: they are the wrong type of model.
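A toy sketch of the first two levels (my own illustration, not from the original post; the exponential-growth setup is an assumption chosen for concreteness):

```python
import math

# Hypothetical example: suppose the true process grows exponentially,
# y = exp(0.8 * x).
xs = [i * 0.1 for i in range(51)]
y_true = [math.exp(0.8 * x) for x in xs]

# Parameter error: the model family is right (exponential) but the
# growth-rate parameter is wrong (0.5 instead of 0.8).
y_param = [math.exp(0.5 * x) for x in xs]

# Model error: the model structure itself is wrong (a straight line),
# so no choice of slope or intercept fits the data well.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(y_true) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, y_true))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
y_model = [slope * x + intercept for x in xs]

# A paradigm error sits one level higher still: e.g. insisting on any
# single-number forecast at all under deep uncertainty. It cannot be
# fixed by tuning parameters or swapping model structure.

param_err = sum(abs(a - b) for a, b in zip(y_param, y_true)) / n
model_err = sum(abs(a - b) for a, b in zip(y_model, y_true)) / n
```

Both kinds of error leave a measurable residual, but only the first is fixable within the chosen model; the paradigm-level question of whether to build such a model at all never shows up in the residuals.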
Relevance to EA
I think a sometimes-valid criticism of EA is that it approaches problems with a paradigm that is not well-suited for the problem it is trying to solve.
eg in the UK there is often discussion of whether LGBT lifestyles should be taught in school, and at what age. This framing makes them seem weird and risky. But it is the wrong frame—LGBT lifestyles are typical behaviour (for instance, there are more LGBT people than adherents of many major world religions). Instead the question is: at what age should you discuss, say, relationships in school? There is already an answer here—I guess children learn about “mummies and daddies” almost immediately. Hence, at the same time you talk about mummies and daddies, you talk about mummies and mummies, single dads, and everything else.
By framing the question differently the answer becomes much clearer. In many cases I think the issue with bad frames (or models) is a category error.
I sometimes call this “the wrong frame”, as in: “I think you are framing that incorrectly.”
I like this framing; I think I use the wrong models when trying to solve challenges in my own life.