I am confused why the title of this post is: “The biggest risk of free-spending EA is not optics or epistemics, but grift” (emphasis added). As Zvi talks about extensively in his moral mazes sequence, the biggest problems with moral mazes and grifters are that many of their incentives actively point away from truth-seeking behavior and toward creating confusing environments in which it is hard to tell who is doing real work and who is not. If it were just the case that a population of 50% grifters and 50% non-grifters would be half as efficient as a population of 100% non-grifters, that wouldn’t be much of an issue. The problem is that a population of 50% grifters and 50% non-grifters probably has approximately zero ability to get anything done or react to crises, and practically everyone within that group (including the non-grifters) will have terrible models of the world.
I don’t think it would be that bad if we ended up wasting a lot of resources; the more likely outcome, I think, is that the presence of grifters will degrade our ability to get accurate information about the world and to build accurate shared models of it. The key problem is epistemics, and I feel like your post makes that point pretty well, but then its title actively contradicts that point, which feels confusing to me.
Sorry that was confusing! I was attempting to distinguish:
Direct epistemic problems: money causes well-intentioned people to have motivated cognition etc. (the downside flagged by the “optics and epistemics” post)
Indirect epistemic problems: the system’s info processing gets blocked by ill-intentioned people
I will try to think of a better title!
Ah, yes, the new title seems better. Thanks for writing this!