I don’t think I would have the patience for EA thinking if the spread weren’t big. Why bother with a bunch of sophisticated-looking models and arguments to only make a small improvement in impact? Surely it’s better to just get out there and do good?
Depends. As Ben and Aaron explain in their comments, high identifiability should in theory be able to offset low spread. In other words, if the opportunity cost of engaging in EA thinking is small enough, it might be worth engaging in it even if the gain from doing so is also small.
Certainly there’s a risk that it turns into a community-wide equivalent of procrastination if the spreads are low. I’d love to see someone tackle that rigorously and empirically!