In addition to retirement planning, if you’re down with transhumanism, consider attempting to maximize your lifespan so you can personally enjoy the fruits of x-risk reduction (and get your selfish & altruistic selves on the same page). Here’s a list of tips.
With regard to early retirement, an important question is how you’d spend your time if you were to retire early. I recently argued that more EAs should be working at relaxed jobs or saving up funds in order to work on “projects”, to solve problems that are neither dollar-shaped nor career-shaped (note: this may be a self-serving argument since this is an idea that appeals to me personally).
I can’t speak for other people, but I’ve been philosophically EA for something like 10 years now. I started from a position of extreme self-sacrifice and have been continuously updating away from that ever since. A hand-wavy argument for this: if we expect impact to follow a Pareto distribution, a big concern should be maximizing the probability that you’re able to have a 100x-or-greater impact relative to baseline. To have that kind of impact, you’ll want to learn to operate at peak performance, probably for extended periods of time. Peak performance looks different for different people, but I’m skeptical of any lifestyle that feels like it’s grinding you down rather than building you up. (This book has some interesting ideas.)
In principle, I don’t think there needs to be a big tradeoff between selfish and altruistic motives. Selfishly, it’s nice to have a purpose that gives your life meaning, and EA does that much better than anything else I’ve found. Altruistically, being miserable is not great for productivity.
One form of self-sacrifice I do endorse is severely limiting “superstimuli” like video games, dessert, etc. I find that after allowing my “hedonic treadmill” to adjust for a few weeks, this doesn’t actually represent much of a sacrifice. Here are some thoughts on getting this to work.