I would support transparency and explanations of the kind that Hashim already provided. But I think the idea that things should be justified by explicit expected-value calculations, although it sometimes seems like a core EA idea, is not actually a good one. We usually can't predict or quantify the kinds of outcomes we are trying to achieve, and such attempts are more misleading than useful.