I don’t have much time, and I don’t expect much attention regardless of how much time I put into writing about this topic. It is boring, frankly. I am a boring writer. The best I can do is keep it short.
Altruistic value is not objectively measurable. If a creature like God existed, then she could judge the altruistic value of actions in terms of their consequences. Everyone else makes do with unreliable mental models that are bound by uncertain future circumstances.
As a brief thought experiment, if you have a sense that an action (for example, a large donation to a reliable effective charity) is altruistic, then you have made a judgement of the altruistic value of that donation. Other actions, in fact, all actions, are vulnerable to the same thought experiment. The only result is to make explicit what you already think.
I could offer my sense of the true failings of the EA community in making better judgements among the specific options for behavior available in certain situations, but those would be context-bound, controversial, and, I suspect, not worth my time. Besides, I don’t care, per se, whether the EA community continues to have blind spots about certain common evil actions and continues to perform them. It’s a big world.
I just heard about this contest and thought, hmmm, how might I summarize a helpful suggestion for improving EA? Hence this little thought experiment of my own.
Sorry I could not put in the effort that I see others do here, but I promise you that my efforts are well-intended and sincere.
Thank you, Karthik