Gosh, I wasn’t (explicitly) thinking about branding at all. This is something I’ve been finding useful in my personal ontology, and I actually wasn’t thinking about sharing it publicly until Ben suggested it; I thought “oh, that makes sense” and tidied up the notes to post here (with some quick mental checks that they didn’t seem somehow harmful). I’m mildly embarrassed that I hadn’t thought about how it could interact with the branding of ideas—but in some recent reflection I realised I was probably underweighting the value of making thinking public even when imperfect, so I’m not certain there was any meta-level error here.
I think there are legitimate questions here for me anyway, though: how much does my conception line up with LessWrong-style rationality, and why am I not just using that mental bucket? This is definitely in a similar space. I guess I tend to think of “rationality” as referring to both a goal (think well) and a culture / {set of content designed to facilitate that}. I want to refer to the objective without taking much of a stance on the informational content people should consume to get better at it. I feel like there are lots of people in the world with many elements of good judgement who have never heard of EA/rationality. I want to be able to point to them and what they’re doing well, rather than to something that feels like a particular (niche?) school of thinking, so I don’t really want strong associations with either EA or LessWrong.
Cool. Yeah, when I saw this it jumped out at me as potentially helping with what I see as a problem: there are a bunch of folks who are either EA-aligned or identify as EA and are also anti-LW, and I would argue that those folks are to some extent throwing the baby out with the bathwater. So it’s useful to have a nice way to rebrand and talk about some of the insights from LW-style rationality that are clearly present in EA—and that we might reasonably like to share with others—without actually relying on LW-centric content.
To what extent are you thinking (without so far explicitly saying it) that “good judgment” is a possible EA rebranding of LessWrong-style rationality?