I disagree, and think that (b) is actually totally sufficient justification. I’m taking as an assumption that we’re using an ethical theory on which people do not have an unbounded obligation to give everything up to subsistence, and on which it is fine to set some boundary on the fraction of your total budget of resources that you spend on altruistic purposes. Many people in well-paying altruistic careers (e.g. technical AI safety) could earn dramatically more money, at least twice as much, if they were optimising for the highest-paying career available to them. I’m fairly sure I could be earning a lot more than I currently am if that were my main goal. But I consider the value of my labour from an altruistic perspective to exceed the additional money I could be donating, and therefore do not see myself as having a significant additional ethical obligation to donate (though I do donate a fraction of my income anyway, because I want to).
By foregoing a large amount of income for altruistic reasons, I think such people are spending a large amount of their resource budget on altruistic purposes, and that if they still have an obligation to donate more money, then people in higher-paying careers should be obliged to donate far more. Which is a consistent position, but not one I hold.
I don’t want to argue about anyone’s specific case, but I don’t think it’s universally true, or even true the majority of the time, that those working in AI could make more elsewhere. It sounds nice to say, but I think people are often earning more in AI jobs than they would elsewhere.
My reasoning was roughly that the machine learning skill set is also extremely employable in finance, which tends to pay better. Though OpenAI salaries do get pretty high nowadays, and if you value OpenAI and Anthropic equity at notably above its current market value, then plausibly they’re higher paying. Definitely agreed it’s not universal.
Sure. But the average person working in AI is not at Jane St level like you, and yes, OpenAI/Anthropic comp is extremely high.
I would also say that people still have a moral obligation. People don’t choose to be smart enough to do ML work.
I also want to point out that having better outside income-maximizing options makes you more financially secure than other people in your income bracket, all else equal, which pro tanto would give you more reason to donate than them.
My point is that “other people in the income bracket AFTER taking a lower paying job” is the wrong reference class.
Let’s say someone is earning $10m/year in finance. I totally think they should donate some large fraction of their income. But I’m pretty reluctant to argue that they should donate more than 99% of it. So it seems completely fine for them to have a post-donation income above $100k, likely far above.
If this person quits to take a job in AI safety that pays $100k/year, because they think this is more impactful than their donations, I think it would be unreasonable to argue that they need to donate some of their reduced salary, because then their “maximum acceptable post-donation salary” has gone down, even though they’re (hopefully) having more impact than if they had donated everything above $100k.
I’m picking fairly extreme numbers to illustrate the point, but the key point is that choosing to do direct work should not reduce your “maximum acceptable post-donation salary”, and that, at least according to my values, that maximum post-donation salary is often above what people get paid in their new direct role.
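To make the arithmetic concrete, here is a toy sketch. All the figures are hypothetical: 99% is just the extreme upper bound discussed above, and 10% is an arbitrary illustrative giving rate, not a claim about what anyone should give.

```python
# Toy illustration of the "maximum acceptable post-donation salary" point.
# All figures are hypothetical; 99% is the extreme upper bound discussed
# above, and 10% is an arbitrary illustrative giving rate.

finance_salary = 10_000_000  # high-paying finance job ($/year)
safety_salary = 100_000      # direct-work AI safety job ($/year)

# Even under an extreme 99% donation norm, the finance worker keeps $100k.
finance_post_donation = finance_salary * (1 - 0.99)
print(f"Finance, donating 99%: ${finance_post_donation:,.0f}")   # $100,000

# If the direct worker must also donate (say) 10% of their reduced salary,
# their post-donation income falls below even that extreme floor, despite
# (hopefully) doing more good via their labour than via donations.
safety_post_donation = safety_salary * (1 - 0.10)
print(f"AI safety, donating 10%: ${safety_post_donation:,.0f}")  # $90,000
```

The asymmetry is the point: any nonzero donation requirement on the reduced salary pushes the direct worker’s post-donation income below the floor implied by even the most demanding standard applied to the high earner.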
I understand this. Good analogy.
I suppose what it comes down to is that I actually DO think it is morally better for the person earning $10m/year to donate $9.9m/year rather than $9m/year, about $900k/year better.
I want to achieve two things (which I expect you will agree with).
I want to “capture” the good done by anyone and everyone willing to contribute, and I want them welcomed, accepted and appreciated by the EA community. This means that if a person could earn $10m/year in finance but is “only” willing to contribute $1m/year (10%) to effective causes, I don’t want them turned away.
I want to encourage, inspire, motivate and push people to do better than they currently are (insofar as it’s possible). I think that includes an Anthropic employee earning $500k/year doing mech interp, a quant trader earning $10m/year, a new grad deciding what to do with their career, and a 65-year-old who has just heard of EA.
I think it’s also reasonable for people to set limits on how much they are willing to do.
This is reasonable. I think the key point that I want to defend is that it seems wrong to say that choosing a more impactful job should mean you ought to have a lower post-donation salary.
I personally think of it in terms of having some minimum obligation to do your part (which I set at 10% by default), plus encouragement (but not obligation) to do significantly more good if you want to.
On the claim that “other people in the income bracket AFTER taking a lower paying job” is the wrong reference class: is there a single appropriate reference class here, as opposed to looking at multiple reference classes and weighting the results in some manner?
I agree that a similarly situated person who decided to take a very high-paying job is a relevant reference class and should get some weight. However, it doesn’t follow that a person with a similar income working a non-impactful job is an irrelevant reference class or should get zero weight.
As Marcus notes, “[p]eople don’t choose to be smart enough to do ML work.” I would add that people don’t choose other factors that promote or inhibit their ability to choose a very high-paying job and/or a high-impact job (e.g., location and circumstances of birth, health, family obligations, etc.). In a pair of persons who are similarly situated economically, giving the more advantaged person a total pass on the moral obligation to donate money seems problematic to me. In this frame of reference, their advantages allowed them to land a more impactful job at the same salary as the less advantaged person, and in a sense we would be excusing them from a moral obligation because they are advantaged. (Giving the more privileged person a big break is also going to make it rather hard to establish substantial giving as a norm in the broader community, but that’s probably not in the scope of the question here.)
I don’t have a clear opinion on how to weight the two reference classes beyond an intuition that both classes should get perceptible weight. (It also seems plausible there are other reference classes to weigh as well, although I haven’t thought about what they might be.)
My argument is essentially that “similar income, non-impactful job” is as relevant a reference class for the “similar income, impactful job” person as it is for the “high income, non-impactful job” person. I also personally think reference classes are the wrong way to think about it. If taking a more impactful job also obliges someone to accept a lower post-donation salary (when they don’t have to), I feel like something has gone wrong, and the incentives are not aligned with doing the most good.
My point is that, even though there’s a moral obligation, unless you think that high-earning people in finance should be donating a very large fraction of their salary (so that their post-donation pay is less than the pay in AI safety), the direct worker’s de facto moral obligation has increased by their choice to do direct work, which seems unreasonable to me.
I would also guess that at least most people doing safety work at industry labs could get a very well-paying role at a top-tier finance firm? The talent bar is really high nowadays.
I think I want to give (b) partial credit here in general. There may not be much practical difference between partial and full credit where the financial delta between a more altruistic job and a higher-salary job is high enough. But there are circumstances in which it might make a difference.
Without commenting on any specific person’s job or counterfactuals, I think it is often true that the person working a lower-paid but more meaningful job secures non-financial benefits not available from the maximum-salary job and/or avoids non-financial sacrifices associated with the maximum-salary job. Depending on the field, these could include lower stress, more free time, more pleasant colleagues, more warm fuzzies / psychological satisfaction, and so on. If Worker A earns 100 currency units doing psychologically meaningful, low-to-optimal-stress work but similarly situated Worker B earns 200 units doing unpleasant work with little in the way of non-monetary benefits, treating the entire 100 units Worker A forewent as spent out of their resource budget on altruistic purposes does not strike a fair balance between Worker A and Worker B.
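As a toy sketch of that partial-credit idea (the 50% discount for non-financial benefits below is an arbitrary assumption, purely for illustration, as are the function name and the unit amounts):

```python
# Toy sketch of "partial credit" for foregone income. The discount for
# non-financial benefits (lower stress, more free time, etc.) is an
# arbitrary illustrative assumption, not a claimed correct value.

def altruistic_credit(foregone_income: float, benefit_discount: float) -> float:
    """Foregone income counted toward one's altruistic resource budget,
    discounted for the non-financial benefits of the lower-paid job."""
    return foregone_income * (1 - benefit_discount)

worker_a_foregone = 100  # Worker A earns 100 units but could earn 200

print(altruistic_credit(worker_a_foregone, 0.0))  # full credit: 100.0
print(altruistic_credit(worker_a_foregone, 0.5))  # partial credit: 50.0
```

Under full credit, Worker A is treated as having spent all 100 foregone units altruistically; under partial credit, only the portion of the gap not offset by non-financial benefits counts.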
Yeah, this is fair.