It’s common for people to trade off their selfish and altruistic goals with a rule of thumb or pledge like “I want to donate X% of my income to EA causes” or “I want to spend X% of my time doing EA direct work,” where X is whatever they’re comfortable with. But among more dedicated EAs, where X >> 50, maybe a more useful mantra is “I want to produce at least Y% of the expected altruistic impact that I would produce if I totally optimized my life for impact” (I sketch one way to write this as a ratio after the lists below). Some reasons why this might be good:
Impact is ultimately what we care about, not sacrifice. The new framing shifts people out of a mindset of zero-sum tradeoffs between their selfish and altruistic parts.
In particular, this framing promotes ambition. Thoughts along these lines have helped me realize that by being more ambitious, I can double my impact without sacrificing much personal well-being. That is a much better move than working 70 hours a week at an “EA job” because I think my commitment level is X = 85% or something.
It also helps me not stress about small things. My current diet is to avoid chicken, eat fewer eggs, and offset any eggs I do eat. Some people around me are vegan, and some people think offsetting is antithetical to EA. Even though neither group was pushy, I would have spent more energy being stressed/sad without the Y mindset, even though I personally believe that diet is a pretty small part of my impact.
Some reasons it might be bad (not a complete list, since I’m biased in favor):
Creating unhealthy dynamics between people trying to prove they have the highest value of Y. This is also a problem with X, but Y is more subjective.
Even Y=20% is really hard to achieve. This is motivating to me but might discourage some people.
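For concreteness, here is one way to write the ratio behind the mantra. This is just my own sketch of a formalization, with “altruistic impact” left as whatever measure you endorse; both expectations are ex ante, taken from your current epistemic state:

$$Y = \frac{\mathbb{E}[\text{altruistic impact} \mid \text{the life you actually live}]}{\mathbb{E}[\text{altruistic impact} \mid \text{the most impactful life available to you}]}$$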
I think I like thinking in this general direction, but just to list some additional counter-considerations:
Almost all of the predictable difference between your realized impact and your theoretical maximum would be due to contingent factors outside of your control.
You can try to solve this problem somewhat by saying you aim for Y% of your ex ante expected value.
But it’s hard (though not impossible) to avoid problems with evidential updates here: there will be situations where your policy discourages you from seeking evidential updates.
The toy example that comes to mind: unless you’re careful, this policy would in theory prevent you from learning about much more ambitious things you could have done, because that would be evidence that your theoretical maximum is much higher than you previously thought! (I put made-up numbers on this in the sketch at the end of this comment.)
The subjectivity of Y is problematic not just for interpersonal dynamics, but from a motivational perspective. Because it’s so hard to estimate both the numerator and especially the denominator, the figure may be too noisy to optimize for or to give you a clean target to aim at.
In practice I think this hasn’t been too much of a problem for me, and I can easily switch from honest evaluation mode to execution mode. Curious if other people have different experiences.
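To put made-up numbers on the toy example above (a minimal sketch; every figure here is invented purely for illustration):

```python
# Toy illustration of the evidential-update worry, with invented numbers.
realized_impact = 50    # ex ante expected impact of your current path (made up)
believed_maximum = 100  # your current estimate of the impact-optimal path (made up)

y_before = realized_impact / believed_maximum  # Y = 0.50

# Suppose you then learn about a far more ambitious project you could have
# pursued, which raises your estimate of the theoretical maximum:
updated_maximum = 1000
y_after = realized_impact / updated_maximum    # Y = 0.05

print(f"Y before the update: {y_before:.0%}")  # 50%
print(f"Y after the update:  {y_after:.0%}")   # 5%

# Your realized impact never changed, but a naive "keep Y above 20%" policy
# now looks violated, so the policy discourages seeking the information.
```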