FWIW I am one of the people doing something similar to what you advocate: I work in biorisk for comparative advantage reasons, although I think AI risk is a bigger deal.
That said, this sort of trading might be easier within broad cause areas than between them. My impression is that the received wisdom among far-future EAs is that AI and bio are both "big deals": AI might be (even more) important, yet bio (even more) neglected. For this reason, even though I suspect most (myself included) would recommend a "pluripotent far-future EA" to look into AI first, it wouldn't take much to tilt the scales the other way (e.g. disposition, comparative advantage, and other things you cite). It also means individuals may not suffer a motivation hit if they are merely doing a very good thing rather than the very best thing by their lights. I think a similar thing applies to the means of furthering a particular cause (whether to strike out on one's own versus looking for a role in an existing group, operations versus research, etc.).
When the issue is between cause areas, one needs to grapple with decisive considerations that open chasms which are hard to cross with talent arbitrage. In the far-future case, the usual story around astronomical waste etc. implies (pace Tomasik) that work on the far future is hugely more valuable than work in another cause area like animal welfare. Thus even if one is comparatively advantaged in animal welfare, one may still think their marginal effect is much greater in the far-future cause area.
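To make that arithmetic explicit, here is a minimal sketch (in Python, with purely hypothetical numbers of my own invention): unless one's productivity edge in the "comparative advantage" cause approaches the value gap one perceives between the cause areas, direct work on the favoured cause still wins by one's own lights.

```python
# Illustrative sketch only: all numbers are hypothetical, made up for this example.
# The point: a big gap in how much Allison values the two cause areas can swamp
# a sizeable comparative advantage.

value_far_future = 100.0  # value Allison assigns to a unit of typical far-future work (hypothetical)
value_animal = 1.0        # value she assigns to a unit of typical animal welfare work (hypothetical)

productivity_far_future = 1.0  # her relative productivity doing far-future work
productivity_animal = 3.0      # suppose she is 3x more productive in animal welfare work

impact_far_future = value_far_future * productivity_far_future  # 100.0
impact_animal = value_animal * productivity_animal              # 3.0

# By her own lights, direct far-future work still dominates unless her
# productivity edge in animal welfare approaches the ~100x value gap.
print(impact_far_future, impact_animal)
```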
As you say, this could still be fertile ground for moral trade, and I also worry about more cynical reasons that explain why this hasn't happened (cf. fairly limited donation trading so far). Nonetheless, I'd like to offer a few less cynical reasons that draw the balance of my credence.
As you say, although Allison and Bettina should think, "This is great: by doing this I get to have a better version of me do work on the cause I think is most important!", they might mutually recognise that their cognitive foibles mean they will struggle with their commitment to a cause they both consider objectively less important, and this term might outweigh their comparative advantage.
It also may be the case that developing considerable sympathy for a cause area may not be enough. Both within and outside EA, I generally salute well-intentioned efforts to make the world better: I wish folks working on animal welfare, global poverty, or (developed world) public health every success. Yet when I was doing the latter, despite finding it intrinsically valuable, I struggled considerably with motivation. I imagine the same would apply if I traded places with an "animal-EA" for comparative advantage reasons.
It would have been (prudentially) better if I could "hack" my beliefs to find this work more intrinsically valuable. Yet people are (rightly) chary of trying to hack prudentially useful beliefs (cf. Pascal's wager, where Pascal anticipated the "I can't just change my belief in God" point, and recommended atheists go to church and do other things which would encourage religious faith to take root), given this may spill over into other domains where they take epistemic accuracy to be very important. If cause area decisions mostly rely on such beliefs (which I hope they do), there may not be much opportunity to hack away this motivational bracken to provide fertile ground for moral trade. "Attitude hacking" (e.g. I really like research, but I'd be better at ops, so I try to make myself more motivated by operations work) lacks this downside, and so looks much more promising.
Further, a better ex ante strategy across the EA community might be not to settle for moral trade, but instead to discuss the merits of the different cause areas. Both Allison and Bettina take the balance of reason to be on their side, and so might hope either a) they get their counterpart to join them, or b) they realise they are mistaken and so migrate to something more important. Perhaps this implies an idealistic view of how likely people are to change their minds about these matters. Yet the track record of quite a lot of people changing their minds about which cause areas are the most important (I am one example) gives some cause for hope.
I suspect that the motivation hacking you describe is significantly harder for researchers than for, say, operations, HR, software developers, etc. To take your language, I do not think that cause area beliefs are generally "prudentially useful" for these roles, whereas in research a large part of your job may consist in justifying, developing, and improving the accuracy of those exact beliefs.
Indeed, my gut says that most people who would be good fits for these many critical and under-staffed supporting roles don't need to have a particularly strong or well-reasoned opinion on which cause area is "best" in order to do their job extremely well. At which point I expect factors like "does the organisation need the particular skills I have", and even straightforward issues like geographical location, to dominate cause prioritisation.
I speculate that the only reason this fact hasn't permeated into these discussions is that many of the most active participants, including yourself and Denise, are in fact researchers or potential researchers, and so naturally view the world through that lens.
I'd hesitate to extrapolate my experience across to operational roles for the reasons you say. That said, my impression was that operations folks place a similar emphasis on these things as I do. Tanya Singh (one of my colleagues) gave a talk on "x-risk/EA ops". From the Q&A (with apologies to Roxanne and Tanya for my poor transcription):
One common retort we get about people who are interested in operations is maybe they don't need to be value-aligned. Surely we can just hire someone who has operations skills but doesn't also buy into the cause. How true do you think this claim is?
I am by no means an expert, but I have a very strong opinion. I think it is extremely important to be values-aligned to the cause, because in my narrow slice of personal experience that has led to me being happy, being content, and that's made a big difference as to how I approach work. I'm not sure you can be a crucial piece of a big puzzle or a tightly knit group if you don't buy into the values that everyone is trying to push towards. So I think it's very, very important.
Bravo!
I agree with your last paragraph, but indeed think that you are being unreasonably idealistic :)