I was too vague in my response here: By "the responsible conclusion", I mean something like "what seems like a good norm for discussing an individual project" rather than "what you should conclude in your own mind".
I agree on silent success vs. silent failure and would update in the same way you would upon seeing silence from a project where I expected a legible output.
If the book isn't published in my example, it seems more likely that some mundane thing went poorly (e.g. the book wasn't good enough to publish) than that the author got cancer or found a higher-impact opportunity. But if I were reporting an evaluation, I would still write something more like "I couldn't find information on this, and I'm not sure what happened" than "I couldn't find information on this, and the grant probably failed".
(Of course, I'm more likely to assume and write about genuine failure based on certain factors: a bigger grant, a bigger team, a higher expectancy of a legible result, etc. If EA Funds makes a $1m grant to CFAR to share their work with the world, and CFAR's website has vanished three years later, I wouldn't be shy about evaluating that grant.)
I'm more comfortable drawing judgments about an overall grant round. If there are ten grants, and seven of them are "no info, not sure what happened", that seems like strong evidence that most of the grants didn't work out, even if I'm not past the threshold of calling any individual grant a failure. I could see writing something like: "I couldn't find information on seven of the ten grants where I expected to see results; while I'm not sure what happened in any given case, this represents much less public output than I expected, and I've updated negatively about the expected impact of the fund's average grant as a result."
(Not that I'm saying an average grant necessarily should have a legible positive impact; hits-based giving is a thing. But all else being equal, more silence is a bad sign.)