After listening to the rest of that discussion with James, I’ll flag that while I agree that “EA is a lot like what many would call an ideology”, I disagree with some of the content in the second half.
I think using tools like ethnography, agent-based modeling, and phenomenology could be neat, but to me they’re pretty low-priority improvements to EA right now. I’d imagine it would take serious effort in any case ($200k? $300k? Someone strong would have to come along with a proposal first) to produce something that really changes decision-making, and I can think of other things I’d prefer that money be spent on.
There seems to be an assumption that the reason EA didn’t take such actions is that EAs weren’t familiar with them and hadn’t read James’s post. I think a more likely reason is often just that doing things takes a lot of work, we have limited resources, and we have many other really important initiatives. Decision makers often have a decent sense of many potential actions, and have decided against them for decent reasons.
Similarly, I didn’t find the argument brought against using the word “aligned” to describe a person very useful. In that case, I would have liked you to really pin down what a good solution would look like. I think it’s easy to err on the side of either “overfitting on specific background beliefs” or “underfitting on specific background beliefs”, and tricky to strike a balance.
My impression is that critics of “EA Orthodoxy” basically always have some orthodoxy of their own. To take an extreme example, I imagine few would say we should openly welcome Nazi sympathizers. If a critic really had no orthodoxy and was okay with absolutely any position, I’d find that itself an extreme and unusual position that almost all listeners would disagree with.
Personally, I feel exhausted by the last few months of what felt to me like a firestorm of angry criticism. Much of it, mainly from the media and Twitter, felt very antagonistic and in poor taste. At the same time, I think our movement has a whole lot of room for improvement.
I feel the same. Hopefully, with this podcast, I can increase the share of EA criticism that is constructive and fun to engage with.
My guess is that 70%+ of critiques are pretty bad (as is the case in most fields). I’d likewise be curious about your ability to push back on the bad stuff, or, maybe better, to draw out information that highlights potential issues. Frustratingly, though, I imagine people will join your podcast and share things in inverse proportion to how much you call them out. (This is a big challenge for podcasts.)
I agree, although I think some subset of the low-quality criticism can be steelmanned into valid points that might not have come up in an internal brainstorming session. And yes, I am still experimenting with how much pushback to give; the first and second episodes are quite different on that metric.
Similarly, I didn’t find the argument brought against using the word “aligned” to describe a person very useful. In that case, I would have liked you to really pin down what a good solution would look like. I think it’s easy to err on the side of either “overfitting on specific background beliefs” or “underfitting on specific background beliefs”, and tricky to strike a balance.
I think this is fair, and I honestly don’t have a good solution. The word “aligned” can point to a real and important thing in the world, but it also risks, in practice, just being used to point to the ingroup.
Thank you for both comments! :)