I was too vague in my response here: By “the responsible conclusion”, I mean something like “what seems like a good norm for discussing an individual project” rather than “what you should conclude in your own mind”.
I agree on silent success vs. silent failure and would update in the same way you would upon seeing silence from a project where I expected a legible output.
If the book isn’t published in my example, it seems more likely that some mundane thing went poorly (e.g., the book wasn’t good enough to publish) than that the author got cancer or found a higher-impact opportunity. But if I were reporting an evaluation, I would still write something more like “I couldn’t find information on this, and I’m not sure what happened” than “I couldn’t find information on this, and the grant probably failed”.
(Of course, I’m more likely to assume and write about genuine failure given certain factors: a bigger grant, a bigger team, a higher expectation of a legible result, etc. If EA Funds makes a $1m grant to CFAR to share their work with the world, and CFAR’s website has vanished three years later, I wouldn’t be shy about evaluating that grant.)
I’m more comfortable drawing conclusions about an overall grant round. If there are ten grants, and seven of them are “no info, not sure what happened”, that seems like strong evidence that most of the grants didn’t work out, even if I’m not past the threshold of calling any individual grant a failure. I could see writing something like: “I couldn’t find information on seven of the ten grants where I expected to see results; while I’m not sure what happened in any given case, this represents much less public output than I expected, and I’ve updated negatively about the expected impact of the fund’s average grant as a result.”
(That’s not to say an average grant should necessarily have a legible positive impact; hits-based giving is a thing. But all else being equal, more silence is a bad sign.)