From what I’ve learned about Shapley values so far, this seems to mirror my takeaway.
Nice! To be clear, I want to put the emphasis on attribution of any kind being an unnecessary step in most cases, rather than on the infeasibility of computing it.
There is complex cluelessness, nonlinearity from perturbations at perhaps even the molecular level, and a lot of moral uncertainty (because even though I think that evidential cooperation in large worlds can perhaps guide us toward solving ethics, it'll take enormous research efforts to actually make progress on that), so infeasibility is already the bread and butter of EA. In the end we'll find a way to 80/20 it (or maybe −80/20 it, as you point out, and we'll never know) so as not to end up paralyzed. I've many times run through mental "simulations" of what I think would've happened if any given subset of the people on my team had not been around, so this 80/20ing is also possible for Shapley values.
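To make that concrete, here's a minimal sketch of what I mean by the 80/20 version: instead of enumerating every subset, sample a few random orderings of the team and average each person's marginal contribution. The team members and the counterfactual value guesses are entirely made up for illustration.

```python
import random

def estimate_shapley(players, value, samples=1000):
    """Monte Carlo Shapley estimate: average each player's marginal
    contribution over randomly sampled orderings of the team."""
    totals = {p: 0.0 for p in players}
    for _ in range(samples):
        order = random.sample(players, len(players))
        coalition = set()
        prev = value(coalition)
        for p in order:
            coalition.add(p)
            cur = value(coalition)
            totals[p] += cur - prev
            prev = cur
    return {p: total / samples for p, total in totals.items()}

# My guesses for "what would the project have been worth if only this
# subset of the team had been around?" (invented numbers)
guesses = {
    frozenset(): 0, frozenset("A"): 2, frozenset("B"): 3, frozenset("C"): 0,
    frozenset("AB"): 7, frozenset("AC"): 4, frozenset("BC"): 5,
    frozenset("ABC"): 10,
}
print(estimate_shapley(["A", "B", "C"], lambda s: guesses[frozenset(s)]))
```

The hard part is of course not the arithmetic but coming up with the eight counterfactual guesses in the first place.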
If you do retroactive public goods funding, it’s important that the collaborators can, up front, trust that the rewards they’ll receive will be allocated justly, so being able to pay them out in proportion to the Shapley value would be great. But as altruists, we’re only concerned with rewards to the point where we don’t have to worry about our own finances anymore. What we really care about is the impact, and for that it’s not relevant to calculate any attribution.
I do not understand this point but would like to (since the stance I developed in the original post went more in the direction of “EAs are too individualist”).
I might be typical-minding EAs here (based on myself and my friends), but my impression is that a lot of EAs come from lefty circles that are very optimistic about the ability of a whole civilization to cooperate and maximize some sort of well-being average. We've then just turned to neglectedness as our coordination mechanism rather than long, well-structured meetings, consensus voting, living together, and other such classic coordination tools. In theory (or with flexible resources, dominant assurance contracts, and impact markets) that should work fine: resources pour into campaigns that are deemed relatively neglected until they are not, at which point the resources can go to the new most neglected thing. Eventually nothing will be neglected anymore.
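In toy form, that coordination mechanism is roughly this greedy loop; the causes, funding gaps, and grant size are of course invented for illustration.

```python
# Toy sketch of "neglectedness as coordination": each round, the next
# chunk of resources goes to whichever cause currently has the largest
# remaining funding gap. All numbers are invented.
funding_gaps = {"cause_A": 90, "cause_B": 40, "cause_C": 25}
budget, chunk = 120, 5

while budget > 0 and any(gap > 0 for gap in funding_gaps.values()):
    most_neglected = max(funding_gaps, key=funding_gaps.get)
    grant = min(chunk, funding_gaps[most_neglected], budget)
    funding_gaps[most_neglected] -= grant
    budget -= grant

print(funding_gaps)  # the remaining gaps after the budget is spent
```

(Real neglectedness judgments are messy, but the structure – fill the biggest gap first, then move on – is the point.)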
So it seems to me that the spirit is the same one of cooperativeness, community, and collective action. Just the tool we use to coordinate is a new one.
But some 99.9% (total guess) of the population are more individualist than that (well, I've only ever lived in WEIRD cultures, so I'm in an obvious bubble). They don't think in terms of civilizations thriving or succumbing to infighting but in terms of the standing of their family in society or even just their own. (I'm excluding people in poverty here – almost anyone, including most altruists, will behave selfishly when they are in dire straits.)
Shapley values are useful for startups or similar enterprises that have a set goal that everyone works toward; the degree to which each collaborator works toward it is treated as a fixed attribute of that collaborator. The core is more about finding an attribution split that sets just the right incentives to maximize the number of people who are interested in collaborating in the first place. (I may be getting this backwards, but it's something of this sort. It's been too long since I researched these things.)
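Since I'm unsure I'm remembering it right, here's a toy check I find useful: exact Shapley values for a small three-player game, plus a test of whether a payoff split is in the core, i.e. whether any subcoalition could secure more on its own than the split pays its members. The coalition values are invented.

```python
from itertools import combinations, permutations

players = ("A", "B", "C")
# Invented coalition values for a three-player game.
v = {
    frozenset(): 0, frozenset("A"): 2, frozenset("B"): 3, frozenset("C"): 0,
    frozenset("AB"): 7, frozenset("AC"): 4, frozenset("BC"): 5,
    frozenset("ABC"): 10,
}

def shapley(players, v):
    """Exact Shapley value: average marginal contribution over all orderings."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: total / len(orders) for p, total in phi.items()}

def in_core(payoff, players, v):
    """True if no subcoalition could secure more on its own than the split
    pays it (assuming the split already distributes exactly v(everyone))."""
    return all(
        sum(payoff[p] for p in S) >= v[frozenset(S)] - 1e-9
        for r in range(1, len(players) + 1)
        for S in combinations(players, r)
    )

phi = shapley(players, v)
print(phi, in_core(phi, players, v))
```

(In this particular toy game the Shapley split happens to lie in the core; in general the two can come apart – the Shapley value always exists, while the core can be empty or can exclude it.)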
If someone is very community- and collective-action-minded, they’ll have the tacit assumption that everyone is working towards the good of the whole community, and they’re just wondering how they can best contribute to that. That’s how I see most EAs.
If someone is very individualistic, they'll want to travel 10 countries, have 2 kids, drive a car that can accelerate real fast, and get their brain frozen when they die. They'll have no tacit assumptions about any kind of greater community or their civilization and never think about collective action. But if they did, their question would be what's in it for them, and, if there is something in it for them, whether they can conspire with a smaller set of collaborators to get more of it. They'll turn to cooperative game theory, crunch the numbers, and then pick out just the right co-conspirators to form a subcoalition.
So that’s the intuition behind that overly terse remark in my last message. ^.^
Off topic: I have a badly structured, hastily written post where I argue that it's not optimal for EAs to focus maximally on the one thing where they can contribute most (AI safety, animal rights, etc.) and neglect everything else, but that it's probably better to cooperate with all other efforts in their immediate environment that they endorse, at least to the extent to which the median person in that environment cooperates with them. Otherwise we're all (slightly) sabotaging each other all the time and achieve less change in aggregate. I feel like mainstream altruists (a small percentage of the population) do this better than some EAs, and the failure mode seems conceptually similar to individualism.
Very glad to read that, thank you for deciding to add that piece to your comment :)!
Awww! :-D