The behavior of billionaires, which maybe indicates more like 10% of income spent on altruism.
ETA: This is still literally majority selfish, but it’s also plausible that 10% altruism is pretty great and looks pretty different than “current median person behavior with marginal money”.
(See my other comment about the percent of cosmic resources.)
The idea that billionaires have 90% selfish values seems consistent with a claim of having “primarily selfish” values in my opinion. Can you clarify what you’re objecting to here?
The literal words of “primarily selfish” don’t seem that bad, but I would maybe prefer “majority selfish”?
And your top-level comment seems like it’s not talking about/emphasizing the main reason to like human control, which is that maybe 10-20% of resources are spent well.
It just seemed odd to me to not mention that “primarily selfish” still involves a pretty big fraction of altruism.
I agree it’s important to talk about and analyze the (relatively small) component of human values that are altruistic. I mostly just think this component is already over-emphasized.
Here’s one guess at what I think you might be missing about my argument: 90% selfish values + 10% altruistic values isn’t the same thing as, e.g., 90% valueless stuff + 10% utopia. The 90% selfish component can have negative effects on welfare from a total utilitarian perspective that aren’t necessarily outweighed by the 10%.
90% selfish values is the type of thing that produces massive factory farming infrastructure, with a small amount of GDP spent mitigating suffering in factory farms. Does the small amount of spending mitigating suffering outweigh the large amount of spending directly causing suffering? This isn’t clear to me.
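To make the arithmetic behind this worry concrete, here is a toy illustration with entirely made-up numbers (not figures I’m defending): suppose altruistic spending produces +1 unit of welfare per dollar, while selfish spending produces −0.2 units per dollar through externalities like factory farming. Then

$$0.10 \times (+1) + 0.90 \times (-0.2) = 0.10 - 0.18 = -0.08 < 0,$$

so a 90/10 split can come out net-negative overall even though the altruistic component does a lot of good per dollar. Whether the real numbers look anything like this is exactly what’s unclear to me.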
(Alternatively, you could think that unaligned AIs will be 100% selfish, and this is clearly worse. But I’d want to understand how you could come to that conclusion, carefully. “Altruism” also encompasses a broad range of activities, and not all of it is utopian or idealistic from a total utilitarian perspective. For example, human spending on environmental conservation might be categorized as “altruism” in this framework, although personally I would say that form of spending is not very “moral” due to wild animal suffering.)
The 90% selfish component can have negative effects on welfare from a total utilitarian perspective that aren’t necessarily outweighed by the 10%.
Yep, this can be true, but I’m skeptical this will matter much in practice.
I typically think that things which aren’t directly optimizing for value or disvalue won’t have intended effects that are very important, and that in the future, unintended effects (externalities) won’t account for much of total value/disvalue.
When we see the selfish consumption of current very rich people, it doesn’t seem like the intended effects are that morally good/bad relative to the best/worst uses of resources. (E.g. owning a large boat and having people think you’re high status aren’t that morally important relative to altruistic spending of similar amounts of money.) So for current very rich people, the main issue would be that the economic process for producing the goods has bad externalities.
And, I expect that as technology advances, externalities reduce in moral importance relative to intended effects. Partially this is based on crazy transhumanist takes, but I feel like there is some broader perspective in which you’d expect this.
E.g. for factory farming, the cheapest way to make meat in the limit of technological maturity would very likely not involve any animal suffering.
Separately, I think externalities from selfish resource usage will probably look pretty similar for unaligned AIs and humans, because most serious economic activities will be pretty similar.
Alternatively, you could think that unaligned AIs will be 100% selfish, and this is clearly worse.
I’d like to explicitly note that I don’t think this is true in expectation for a reasonable notion of “selfish”. Though I maybe do think something sort of in this direction if we use a relatively narrow notion of “altruism”.