Here are four more things that I’m somewhat skeptical of and would like someone with more time on their hands and the right brain for the topic to see whether they hold water:
Evidential cooperation in large worlds is ridiculously underexplored considering that it might “solve ethics” as I like to epitomize it. AI safety is arguably more urgent, but maybe it can even inform that discipline in some ways. I have spent about a quarter of a year thinking about ECL, and have come away with the impression that I can almost ignore my own moral intuitions in favor of what little I think I can infer about the compromise utility function. More research is needed.
There is a tension between (1) the rather centralized approach that the EA community has traditionally taken and that is still popular, especially outside key organizations like CEA, and the pervasive historical failures of planned economies, and between (2) the much greater success of Hayekian approaches and the coordination that is necessary to avert catastrophic coordination failures that can end our civilization. My cofounders and I have started an EA org to experiment with market mechanisms for the provision of public and common goods, so we are quite desperate for more thinking about how we, and EAs in general, should resolve these tensions.
80k and others have amassed evidence that it’s best for hundreds or thousands of people to apply for each EA job, e.g., because the difference between the best and second-best candidate is arguably large. I find this counterintuitive. Counterintuitive conclusions are interesting and the ones we’re likely to learn most from, but they are also more often than not wrong. In particular, my intuition is that, as a shallow heuristic, people will do more good if they focus on what is most neglected, all else equal. It seems suspicious that EA jobs should be an exception to this rule. I wonder whether it’s possible to make a case against it along the lines of this argument, quantitatively trading off the expected difference between the top and the second-best candidate against the risk of pushing someone (the second-best candidate, zero to several hops removed) out of the EA community and into AI capabilities research (e.g., because they run out of financial runway), or simply by scrutinizing the studies that 80k’s research is based on. A toy version of that trade-off is sketched right after this list.
I think some EAs are getting moral cooperation wrong. I’ve very often heard about instances of this, but I can’t readily cite any. A fictional example is, “We can’t attend this workshop on inclusive workplace culture because it delays our work by one hour, which will cause us to lose out on converting 10^13 galaxies into hedonium because of the expansion of space.” This is, in my opinion, what it looks like to get moral cooperation wrong. Obviously, all real examples will be less exaggerated, more subtle, and more defensible too.
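To make the trade-off in the 80k point above concrete, here is a minimal back-of-the-envelope sketch in Python. Every parameter name and number in it is a made-up placeholder of mine, not an estimate drawn from 80k’s or anyone else’s research; the point is only to show the shape of the comparison, not to settle it.

```python
# Toy model: is encouraging one more marginal applicant for an EA job net positive?
# All parameters and numbers are made-up placeholders, not empirical estimates.

def net_value_of_marginal_applicant(
    delta_top_vs_second: float,      # extra value/yr if the hire is the top rather than the second-best candidate
    p_marginal_is_top: float,        # probability the marginal applicant turns out to be the best candidate
    p_dropout_per_rejection: float,  # probability a rejected applicant leaves EA (e.g., runs out of runway)
    value_lost_if_dropout: float,    # value/yr lost if a rejected candidate leaves the community
    n_extra_rejections: int,         # additional rejections implied by the extra applicant
) -> float:
    expected_gain = p_marginal_is_top * delta_top_vs_second
    expected_loss = n_extra_rejections * p_dropout_per_rejection * value_lost_if_dropout
    return expected_gain - expected_loss

# Under these made-up numbers the two effects exactly cancel (1000 - 1000 = 0),
# i.e., even a large top-vs-second gap can be offset by a small dropout risk.
print(net_value_of_marginal_applicant(
    delta_top_vs_second=100_000,
    p_marginal_is_top=0.01,
    p_dropout_per_rejection=0.02,
    value_lost_if_dropout=50_000,
    n_extra_rejections=1,
))
```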
A bit of a tangent, but: sometimes funders try to play 5D chess with each other to avoid funging each other’s donations, and this results in the charity not getting enough funding.
That seems like it could be a defection in a moral trade, which is likely to burn gains from trade. Often you can just talk to the other funder and split 50:50 or use something awesome like the S-Process.
But I’ve been in the situation where I wanted to make a grant/donation (I was doing ETG, i.e., earning to give), knew of the other donor, but couldn’t communicate with them because they were anonymous to me. Hence I resorted to a bit of proto-ECL: There are two obvious Schelling points, (1) both parties each fill half of the funding gap, or (2) both parties each put half of their pre-update budget (the budget they had planned before learning of the other donor) into the funding gap. Point 1 is inferior because the other party knows, without even knowing me, that more likely than not my donation budget is much smaller than half the funding gap, and because the concept of a funding gap is subjective and unhelpful anyway. Point 2 should thus be the compromise point of which it is relatively obvious to both parties that it should be obvious to both parties. Hence I donated half my pre-update budget.
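As a minimal illustration of the two candidate Schelling points, here is a short sketch with hypothetical numbers; the function names and figures are mine and not part of any established procedure.

```python
# Two candidate Schelling points for donors who cannot communicate.
# All figures are hypothetical; "pre-update budget" means each donor's planned
# donation before learning that the other donor exists.

def donation_fill_half_gap(funding_gap: float) -> float:
    """Schelling point 1: each party tries to fill half of the (subjective) funding gap."""
    return funding_gap / 2

def donation_half_budget(pre_update_budget: float) -> float:
    """Schelling point 2: each party donates half of their pre-update budget."""
    return pre_update_budget / 2

# Hypothetical case: a small donor alongside a large anonymous funder.
funding_gap = 200_000.0
my_budget = 10_000.0

print(donation_fill_half_gap(funding_gap))  # 100000.0 -- infeasible for the small donor
print(donation_half_budget(my_budget))      # 5000.0   -- the rule actually followed above
```

Under these assumptions, rule (1) asks the small donor for far more than they have, which is part of why rule (2) is the more robust focal point.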
There’s probably a lot more game theory that can be done on refining this acausal moral trade strategy, but I think it’s pretty good already, probably better than the status quo without communication.