It seems that everyone in EA / EA-adjacent circles who is not OP or EVF needs to be wary to some extent. If no one is on the lookout for these sorts of situations and no one is going to be indemnifying many EA individuals and entities, then other people/entities need to clearly understand that and take appropriate action to protect their own interests in the future.
All this sounds like a step back from a higher-trust environment in certain respects. For instance, it may well be appropriate for OP to “fund EA efforts opportunistically, in situations where it seems to help both parties, [without wanting] to be seen as having any long-term obligations or such.” But that is more of a transactional relationship. People in transactional relationships do not generally defer to their counterparties on questions of the common good, count on them to look out for their needs, and so on.
It’s possible that an “opportunistic[]” approach that is not “responsible for . . . the EA community” is the right strategy for OP to pursue. But a more transactional / opportunistic approach to the EA community carries costs in efficiency, in the risk tolerance of individuals and smaller institutions, in morale, and so forth.
Agreed!
I also imagine that these groups would largely agree. If one were to ask OP/EVF, “do you think the EA community would be well off developing infrastructure so that it doesn’t have to rely so much on you two?”, I could imagine them being quite positive about it.
(That said, I imagine they might be less enthusiastic about certain actual implementations of this, especially ones that might get in the way of their other plans.)