It’s fine; I am not personally offended by these estimates of how much I’m worth (not least because I don’t take them seriously, and I actually don’t think you do either, in that faced with a dying baby or a billion dying insects you’d save the baby). On some level I find them very funny. At the very least, however, I would ask this question: can you see how this stuff is sort of bad for EA’s wider reputation, on some quite fundamental level?
It’s very hard to trust someone who thinks you’re only worth 50 chickens or 500 insects or whatever. Sure, you might well suspect he’s lying (to you and himself), and if you know him really well you can probably say with near-certainty his revealed preferences are very different, but from the outside it’s kind of hard to know for sure. Occasionally people do actually believe this stuff! EA already has a trust problem, between FTX and the subsequent Governance Slack leaks, and this sort of thing just compounds it massively.
By default, you shouldn’t really trust a utilitarian/consequentialist, because you only need to be on the wrong side of their utility/consequences calculations once. Ironically, I actually think Will MacAskill, of all people, explicitly acknowledged this problem once upon a time and wrote somewhere about how consequentialists should address it by committing to high ethical standards in their everyday dealings. If I’m remembering rightly, well... how’s that working out?
But look, deep down, our whole society is built on trust. The law is a last backup when things go wrong, not a first resort, which means if you want to achieve anything meaningful beyond a very tiny set of ideologically identikit fellow travellers you need to be trustworthy. This means that EAs need other people to trust them, and somehow I feel like “oh those are the guys who think I’m worth 50 chickens” just doesn’t really help. Maybe the problem is the messaging here as much as the content, although incredibly long-winded academic arguments that fly in the face of basic common sense don’t really help either.
How would you decide how to prioritize spending between humans and animals in a way that didn’t raise this issue? This feels to me like a disguised argument against any concern for animals whatsoever, since the actual numbers in the comparison aren’t really what’s generating the intuitive repugnance so much as the comparison at all, as evidenced by ‘faced with a dying baby or a billion dying insects you’d save the baby’. Is your view that all animal rights charity is creepy because the money could have been spent on people instead? Or just that making explicit that doing animal rights charity means not helping people, and so implies a view about trade-offs, is creepy? Lying about why you’re doing what you’re doing is also, by definition, untrustworthy.
I also think what you’re doing here is a bit sleazy: you’re blurring the line, I think, between ‘even if this is right, it ought to be obvious to you that you should lie about it for PR reasons, you weirdo’ and ‘this is obviously ridiculous, as seen by the fact that normal people disagree’, so that you can’t get pinned down by the objections to either individually. (They’re consistent, so you can believe both, but they are distinct.)
The aim of EA is to do the most good, and to attract people who are open to updating their views on how to carry out this project. If the “weirdness” of EA means that we repel people who aren’t committed to doing the most good, that’s probably a good thing. And I’d be far more trusting of utilitarians/consequentialists within EA than I would be of those who practice common-sense morality (who let millions of people die every year), egoists (who’d save themselves even if it meant millions died), deontologists (who wouldn’t lie to protect me from a totalitarian regime), religious fundamentalists, or people who say they want to nuke San Francisco.
Let me renew my offer to talk. DM me for my Calendly link.