How confident are you that the EALF survey respondents were using your relatively narrow definition of judgment, rather than the dictionary definitions which, as you put it, “seem overly broad, making judgment a central trait almost by definition”?
I ask because, scanning the other traits in the survey, they all seem like things that, under common definitions, are useful for some or even many but not all roles, whereas judgment as usually defined is useful ~everywhere, making it unsurprising that it comes out on top. At least, that’s why I’ve never paid attention to this particular part of the EALF survey results in the past.
But I appreciate you’ve probably spoken in person to a number of the EALF people and had a better chance to understand their views, so I’m mostly curious whether you feel those conversations support the idea that the other respondents were thinking of judgment in the narrower way you would use the term.
Hi Alex,
In the survey, good judgement was defined as “weighing complex information and reaching calibrated conclusions”, which is the same rough definition I was using in my post.
I’m not sure how many people absorbed this definition, as opposed to using their own. From talking to people, my impression is that most use ‘judgement’ in a narrower sense than the dictionary definitions, but maybe still broader than my definition.
It’s maybe also worth saying that my impression that judgement is highly valued isn’t just based on the survey; I highlighted the survey because it’s especially easy to communicate. I also have the impression that people often talk about how judgement might be improved and assessed, and treat it as a trait to look for in hiring, and it seems to come up more in EA than in most other areas (with certain types of investing maybe being the exception).
I’m actually confused about what you mean by your definition. I have an impression about what you mean from your post, but if I try to just go off the wording in your definition I get thrown by “calibrated”. I naturally want to interpret this as something like “assigns confidence levels to their claims that are calibrated”, but that seems ~orthogonal to having the right answer more often, which means it isn’t that large a share of what I care about in this space (and I suspect is not all of what you’re trying to point to).
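To make concrete why I say “calibrated” seems ~orthogonal to being right more often, here’s a toy sketch (my own illustration; the two forecasters and their numbers are invented for the example):

```python
from collections import defaultdict

def accuracy(forecasts):
    """Fraction of binary calls (p > 0.5 means 'predict yes') that were right.
    Each forecast is a (stated_probability, outcome) pair, outcome in {0, 1}."""
    return sum((p > 0.5) == (o == 1) for p, o in forecasts) / len(forecasts)

def calibration_error(forecasts):
    """Mean absolute gap between stated probability and empirical frequency,
    weighted by how often each stated probability was used."""
    buckets = defaultdict(list)
    for p, o in forecasts:
        buckets[p].append(o)
    n = len(forecasts)
    return sum(len(outs) / n * abs(p - sum(outs) / len(outs))
               for p, outs in buckets.items())

# Forecaster A: perfectly calibrated but not very accurate --
# always says 60%, and the event happens 60% of the time.
a = [(0.6, 1)] * 6 + [(0.6, 0)] * 4

# Forecaster B: overconfident (always says 95%) but right more often (80%).
b = [(0.95, 1)] * 8 + [(0.95, 0)] * 2

print(accuracy(a), calibration_error(a))  # A: lower accuracy, ~zero calibration error
print(accuracy(b), calibration_error(b))  # B: higher accuracy, larger calibration error
```

On this toy reading, B is the better forecaster by accuracy while A is the better one by calibration, which is why calibration alone doesn’t seem to capture most of what I care about here.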
Now I’m wondering: does your notion of judgement roughly line up with my notion of meta-level judgement? Or is it broader than that?
For one data point, I filled in the EALF survey and had in mind something pretty close to what I wrote about in the post Ben links to. I don’t remember paying much attention to the parenthetical definition—I expect I read it as a reasonable attempt to gesture towards the thing that we all meant when we said “good judgement” (though on a literal reading it’s something much narrower than I think even Ben is talking about).
I think that good judgement in the broad sense is useful ~everywhere, but that:
It’s still helpful to try to understand it, to know better how to evaluate it or improve at it;
For reasons Ben outlines, it’s more important for domains where feedback loops are poor;
The cluster Ben is talking about gets disproportionate weight when thinking about strategic directions.