Thanks for this excellent write-up and the later exchanges in the comments. It was very educational for me. A quick thought, written on my phone.
It strikes me that a common failure mode in EA and elsewhere is to assume transparency of reasoning and causality, i.e., to wrongly assume that others will understand what you are trying to do and why. One solution might be to recommend that proponents of (significant) new initiatives generally share a (short) theory of change, similar to Sam’s, in advance of or alongside their idea. I think Michael Aird made a similar argument, though for organisations only.
I’ll add that while I like this idea, it might be too demanding. It could impose excessive costs on innovation and speed (e.g., people might not do good things because they don’t want the hassle of having to post and debate a theory of change).