Hm… thinking in terms of 2 types of claim doesn’t seem like much of an improvement over thinking in terms of 1 type of claim, honestly. I was not at all trying to say “there are some things we’re really sure of and some things we’re not.” Rather, I was trying to point out that EA is associated with a bunch of different ideas; how solid the footing of each idea is varies a lot, but how those ideas are discussed often doesn’t account for that. And by “how solid” I don’t just mean on a 1-dimensional scale from less to more solid—more like, the relevant evidence and arguments and intuition and so on all vary a ton, so it’s not just a matter of dialing up or down the hedging.
A richer framing for describing this that I like a lot is Holden’s “avant-garde effective altruism” (source):
A general theme of this blog is what I sometimes call avant-garde effective altruism. Effective altruism (EA) is the idea of doing as much good as possible. If EA were jazz, giving to effective charities working on global health would be Louis Armstrong—acclaimed and respected by all, and where most people start. But people who are really obsessed with jazz also tend to like stuff that (to other people) barely even sounds like music, and lifelong obsessive EAs are into causes and topics that are not the first association you’d have with “doing good.” This blog will often be about the latter.
I don’t think it has to be that complicated to work this mindset into how we think and talk about EA in general. E.g. you can start with “There’s reason to believe that different approaches to doing good vary a ton in how much they actually help, so it’s worth spending time and thought on what you’re doing,” then move to “For instance, the massive income gap between countries means that if you’re focusing on reducing poverty, your dollar goes further overseas,” and then from there to “And when people think even more about this, like the EA community has done, there are some more unintuitive conclusions that seem pretty worthy of consideration, for instance...” and then depending on the interaction, there’s space to share ideas in a more contextualized/nuanced way.
That seems like a big improvement over the current default, which seems to be “Hi, we’re the movement of people who figure out how to do the most good, here are the 4 possibilities we’ve come up with, take your pick,” which I agree wouldn’t be improved by “here are the ones that are definitely right, here are the ones we’re not sure about.”