Daniel Gambacorta has discussed value drift in two episodes of his Global Optimum Podcast (one & two) and recommends the following, which I found really helpful:
“Choose effective altruist endeavors that also grant you selfish benefits. There are a number of standard human motivators. Status, friends, mates, money, fame. When these things are on the line work actually gets done. Without these things it’s a lot harder. If your effective altruism gets you none of the things that you selfishly want, that’s going to make things harder on you. If your plan is to go off into a cave, do something brilliant and never get credit for it, your plan’s fatal flaw is you won’t actually do it. If you can’t get things you selfishly want through effective altruism, you are liable to drift towards values that better enable you to get what you selfishly want. We humans are extremely good at fulfilling selfish goals while being self-deceived about it. With this in mind, you might pick some EA endeavor which is impactful but also gets you some standard things that humans want, because you are a human and you probably want the standard things other humans want. Even if the endeavor that grants you selfish benefits is less impactful in the abstract, this could be outweighed by the chance that you actually do it, and also how much more productive you will be when you work on something that is incentivized. If you do something that grants you significant selfish benefits, you just have to watch out for optimizing for those benefits instead of effective altruism, which would of course defeat the purpose.”
This strikes me as incredibly good advice.