With regard to “following your comparative advantage”:
Key statement: While “following your comparative advantage” is beneficial as a community norm, it might be less relevant as individual advice.
Imagine two people, Ann and Ben.
Ann has very good career capital to work on cause X: she studied a relevant subject, has relevant skills, and maybe some promising work experience and a network.
Ben has very good career capital to contribute to cause Y.
Both have the aptitude to become good at the other cause as well, but it would take some time, involve some cost, and maybe not be as safe.
Now Ann thinks that cause Y is 1000 times as urgent as cause X, and for Ben it is the other way around.
Both consider retraining for the cause they think is more urgent.
From a community perspective, it is reasonable to promote the norm that everyone should follow their comparative advantage. This avoids prisoner’s dilemma situations and increases the total impact of the community. After all, the solution that would best satisfy both Ann’s and Ben’s goals would be for each to continue in their respective area of expertise. (Let’s assume they could be motivated to do so.)
However, from a personal perspective, let’s look at Ann’s situation:
In reality, of course, there will rarely be a Ben to mirror Ann who is also considering retraining at exactly the same time as Ann. And if there were, they would likely not know each other. So Ann is not in a position to offer anyone the specific trade that she could offer Ben, namely: “I keep contributing to cause X if you continue contributing to cause Y.”
So these might be Ann’s thoughts:
“I really think that work on cause Y is much more urgent than anything I could contribute to cause X. And yes, I have already considered moral uncertainty. If I went on to work on cause X, this would not directly cause someone else to work on cause Y. I realize that it is beneficial for EA to have a norm that people should follow their comparative advantage, and the creation of such a norm would be very valuable. However, I do not see how my decision could possibly have any effect on the establishment of such a norm.”
So for Ann it seems to be a prisoner’s dilemma without iteration, and she ought to defect.
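To make that structure concrete, here is a minimal sketch in Python with entirely made-up numbers: staying in one’s established field is assumed to yield 1.0 units of direct work, retraining only 0.5 units (the career-capital cost above), and each person values work on the other’s cause at 1/1000 of work on their preferred one.

```python
# Toy payoff calculation; all numbers are hypothetical.
RETRAIN_DISCOUNT = 0.5  # assumed productivity loss from switching fields

def ann_valuation(ann_retrains: bool, ben_retrains: bool) -> float:
    """Total impact of both careers, measured by Ann's values
    (a unit of cause-Y work is worth 1000 units of cause-X work to her)."""
    # Ann does cause-Y work if she retrains, otherwise cause-X work.
    ann_output = RETRAIN_DISCOUNT * 1000.0 if ann_retrains else 1.0
    # Ben does cause-X work if he retrains, otherwise cause-Y work.
    ben_output = RETRAIN_DISCOUNT * 1.0 if ben_retrains else 1000.0
    return ann_output + ben_output

for ann in (False, True):
    for ben in (False, True):
        print(f"Ann retrains={ann!s:<5}  Ben retrains={ben!s:<5}  "
              f"-> value by Ann's lights: {ann_valuation(ann, ben)}")

# Retraining dominates for Ann (1500.0 > 1001.0 and 500.5 > 1.5), yet both
# staying put (1001.0) beats both retraining (500.5).
```

By symmetry, Ben faces the same payoffs with X and Y swapped. So under these (made-up) numbers, with no way to trade, each retrains, and both end up worse off by their own lights than if both had stayed.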
I see one consideration for why Ann should continue working on cause X:
If Ann believed that EA was going to grow a lot, EA would reach many people with a better comparative advantage for cause Y. And if EA successfully promoted said norm, those people would all work on cause Y until Y was no longer neglected enough to be much more urgent than cause X. Whether Ann believes this is likely to happen depends strongly on her predictions of the future of EA and on the specific characteristics of causes X and Y. If she believed this would happen (soon), she might think it was best for her to continue contributing to X.
However, I think this consideration is fairly uncertain, and I would not give it high weight in my decision process.
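For illustration only, here is a toy version of that consideration. It assumes, purely hypothetically, that cause Y’s marginal urgency scales as 1/n with the number of people working on it while cause X’s stays flat, calibrated so that Y starts out 1000 times as urgent with 100 workers:

```python
# Toy model of the growth consideration; all numbers are made up.
K = 1000 * 100  # calibrated so Y is 1000x as urgent as X at 100 workers

for n_y in (100, 1_000, 10_000, 100_000):
    print(f"{n_y:>7,} people on Y -> Y is {K / n_y:,.0f}x as urgent as X")

# If EA's growth pushed cause Y up this curve soon, today's 1000x gap
# could close before Ann's retraining had paid for itself.
```

Of course, whether returns actually diminish this fast, and how quickly the workforce would grow, is exactly the uncertainty mentioned above.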
So it seems that:
- it clearly makes sense (for CEA, 80,000 Hours, …) to promote such a norm, but
- it makes much less sense for an individual to follow the norm, especially if said individual is not cause-agnostic or does not think that all causes are within the same 1-2 orders of magnitude of urgency.
All in all, the situation seems pretty weird, and there does not seem to be a consensus amongst EAs on how to deal with it. A real-world example: I have met several trained physicians who thought that AI safety was the most urgent cause. Some retrained to do AI safety research; others continued working in health-related fields. (Of course, for each individual there were probably many other factors apart from impact that played a role in their decision, e.g. risk aversion, personal fit for AI safety work, fit with the rest of their lives, …)
PS: I would be really glad if you could point me to errors in my reasoning or aspects I missed, as I, too, am a physician currently considering retraining for AI safety research :D
PPS: I am new to this forum and need 5 karma to be able to post threads, so feel free to upvote.
Hi there,
I think basically you’re right, in that people should care about comparative advantage to the degree that the community is responsive to their choices and they’re value-aligned with typical people in the community. If no one is going to change their career in response to your choice, then you default back to whatever looks highest-impact in general.
I have a more detailed post about this, but I conclude that people should consider all of role impact, personal fit, and comparative advantage, putting more or less emphasis on comparative advantage compared to personal fit depending on certain conditions.