Voting Theory has a HOLE

[[epistemic status: I am sure of the facts I present regarding the algorithms’ functionality, as well as the shared limitation at the heart of Gibbard’s Theorem and its relatives. Yet these small certainties only serve my ultimate conclusion: “there is an unexplored domain, about which we have drawn no definitive conclusions.” I then hypothesize how we might gain some certainty regarding this enlarged class of voting algorithms, though I am likely wrong!]]
TL;DR—Voting theory hit a wall in ’73: Gibbard’s Theorem proved that no voting method can avoid strategic voting (unless it is dictatorial or limited to two choices). Yet Gibbard’s premise rests upon a particular DATA-TYPE: a ranked list. In contrast, we know for a fact from machine-learning research that other data-types can succeed even where a ranked list fails. So the ‘high-dimensional latent-space vectors’ of artificial neural networks are NOT inherently limited the way Gibbard’s ranked lists were. Either Gibbard’s result applies to that data-type as well (which is a new research paper waiting to happen), OR it does not, in which case we might find a strategy-less voting method. That seems worth looking into!
Gibbard’s Theorem
I am not taking issue with any of Gibbard’s steps; I am pointing to a gap in his premise. He restricted his analysis to strategies “which can be expressed as a preference n-tuple”… a ranked list. And the conclusion voting theorists seem to draw from this is: “because ranked lists fail, ALL data-types must also fail.” That claim is factually incorrect, and a failure of logic.
Disproof: in machine learning, a Variational Auto-Encoder converts each input into a vector in a high-dimensional latent space. Because those vectors preserve important relational information, the VAE can accurately reconstruct its inputs. Yet if you collapse those latent-space vectors into scalars, using ANY distance metric, you have LOST that relational information; with only a ranked list to compute upon, the Variational Auto-Encoder fails. The same holds for Transformers, the most successful form of deep neural network (Transformers compare vectors by dot product; reduce those vectors to scalars and the comparison becomes meaningless). A toy demonstration follows.
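To make the information-loss point concrete, here is a toy sketch (my own illustration; the vectors and the query are made up, not drawn from any VAE or Transformer): reducing two distinct latent vectors to scalars via a distance metric can collapse them onto the same value, while a dot product against a query vector still tells them apart.

```python
# Toy example: a scalar (distance-based) reduction erases a distinction
# that dot-product similarity preserves. All vectors are hypothetical.
import numpy as np

a = np.array([1.0, 0.0, 0.0])      # latent vector for ballot A (made up)
b = np.array([0.0, 1.0, 0.0])      # latent vector for ballot B (made up)
query = np.array([0.9, 0.1, 0.0])  # a comparison vector (made up)

# Reduce each vector to a scalar: its distance from the origin.
# Both collapse to 1.0, so any ranking built on this scalar ties them.
print(np.linalg.norm(a), np.linalg.norm(b))   # 1.0 1.0

# Dot-product similarity (the comparison Transformer attention uses)
# still separates the two vectors cleanly.
print(query @ a, query @ b)                   # 0.9 0.1
```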
This is an existence proof that an algorithm CAN function properly when fed latent-space vectors, DESPITE that same algorithm failing when given ONLY a ranked list. So the claim that “if ranked lists fail, then everything must fail” is categorically false. There is an entire domain left unexplored, abandoned. We cannot make confident claims about “the impossibility of strategy-less voting algorithms” when those algorithms are given latent-space vectors. No one has looked there yet.
Some Hope?
There are reasons to suspect that we might find a strategy-less voting algorithm in that latent space. First, an optimization argument: neural-network loss surfaces are not convex in general, so gradient descent carries no universal guarantee of rolling to the true bottom; but for simple, well-behaved regret functions the objective genuinely is convex, and even in the general case over-parameterized networks reliably find low-loss solutions in practice. So a system built to ‘minimize regret’ could plausibly be expected to approach the true minimum (eventually!). However, the more important reason for hope comes from looking at how latent spaces cluster their voters, and what that clustering would do to a strategic ballot.
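As a sanity check on that optimization claim, here is a minimal sketch under an assumption I am inventing purely for illustration: each voter’s regret is the squared distance between the chosen outcome and that voter’s latent vector. This special case really is convex (its unique minimum is the mean of the voter vectors), so plain gradient descent provably reaches the bottom; general neural-network losses enjoy no such guarantee.

```python
# Minimal sketch: gradient descent on mean squared-distance 'regret'.
# The regret function is an illustrative assumption, not a proposal.
import numpy as np

rng = np.random.default_rng(0)
voters = rng.normal(size=(100, 8))   # 100 hypothetical 8-d ballot vectors

outcome = np.zeros(8)                # the outcome we optimize
lr = 0.1
for _ in range(200):
    # Gradient of mean_i ||outcome - voter_i||^2 with respect to outcome.
    grad = 2.0 * (outcome - voters).mean(axis=0)
    outcome -= lr * grad

# For this convex special case, descent lands on the analytic optimum:
print(np.allclose(outcome, voters.mean(axis=0)))   # True
```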
In a latent space, each self-similar group of voters forms a distinct cluster. So a naïve voting algorithm could seek some ‘mode’ of the clusters, trimming away the most aberrant ballots first. Wait! Those ‘aberrant’ ballots are the ones that didn’t fit in any cluster, and while a few genuine weirdos may be among them (myself included), the set of ballots outside the clusters will ALSO contain ALL the strategic ballots, since a strategic ballot misrepresents its voter and therefore lands away from that voter’s natural bloc! Thus a naïve ‘modal’ algorithm would discard strategic ballots first, because those ballots are idiosyncratic; see the sketch below.
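Here is a minimal sketch of that naïve ‘modal’ trimming, assuming ballots already live in a latent space. The voter blocs, the DBSCAN parameters, and the centroid read-out at the end are all my own illustrative choices, not a worked-out voting rule.

```python
# Minimal sketch: density-cluster the ballot vectors, trim the ballots
# that fit no cluster, and tally only the survivors. Illustrative only.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Two hypothetical voter blocs plus a few idiosyncratic ballots.
bloc_a   = rng.normal(loc=0.0, scale=0.3, size=(60, 4))
bloc_b   = rng.normal(loc=3.0, scale=0.3, size=(40, 4))
oddballs = rng.uniform(low=-6.0, high=9.0, size=(5, 4))
ballots  = np.vstack([bloc_a, bloc_b, oddballs])

# DBSCAN labels ballots that belong to no dense cluster as -1.
labels = DBSCAN(eps=0.8, min_samples=5).fit(ballots).labels_
kept = ballots[labels != -1]   # the trimmed ballots never get tallied

print(f"trimmed {np.sum(labels == -1)} of {len(ballots)} ballots")
print("centroid of retained ballots:", kept.mean(axis=0).round(2))
```

I chose a density-based clusterer here because it natively labels points that belong to no cluster, rather than forcing every ballot into some bloc; that is exactly the ‘trim the aberrant ballots first’ behavior described above.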
Purpose!
If we can find a strategy-less voting algorithm, that would buy us a far better future. I don’t know where to begin evaluating the gains from better collective decision-making; but, considering that government budgets consume a large fraction of the global economy, there are likely trillions of dollars that could be better allocated. That’s the value statement justifying an earnest effort to find such a voting algorithm OR to prove that none can exist in this domain either.
I hope to hear your thoughts, and if anyone would like to sit down to flesh out a paper, I’m game. :)