I tend to agree with you, though I would rather people erred on the “close early” side of the coin than the “hold out” side, simply because the sunk cost fallacy and confirmation bias in your own idea are incredibly strong, and I see no compelling reason to think current funders in the EA space help counteract them (beyond perhaps being more aware of them than the average funder).
In an ideal system, funders would drive most of these decisions by requiring clear milestones and evaluation processes from the organisations they fund. A funder doing this could identify predictive signals of success and help avoid closures that come too early or too late (e.g. “on average, policy advocacy groups that went on to succeed had met fewer/more comparable milestones by this stage, so we recommend continuing/stopping funding”). An organisation could still pitch for why it is an exception to the average, but the funder should be in the best position to know what signals success and what doesn’t.
Unfortunately I don’t see such a system, and I fear the incentives in the EA ecosystem aren’t aligned to create it. The organisations being funded enjoy the looser, less funder-involved setup. And funders reduce their reputational risk by not properly evaluating what is working and why, while continuing to fund projects they are personally interested in but that have questionable causal impact chains. *Noting that I think EA GHD has much less of this issue, mainly because funders anchor on GiveWell assessments, which to a large degree deliver the mechanism I outline above.