I agree that we’re not currently good at “maximizing welfare,” but I worry that futarchy would lead to problems stemming from over-optimization of a measure that is misaligned with what we actually want. In other words, my worry is that common-sense barriers would be removed under futarchy (or that, after writing down an explicit welfare measure, we would lose sight of what we actually care about), and we would over-optimize whatever that measure specifies, which will never be perfectly aligned with our actual needs/desires/values.
This is a version of Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” (Or possibly Campbell’s law, which is more specific.)