A wayward math-muddler for bizarre designs, artificial intelligence options, and spotting trends no one wanted; articles on Medium as Anthony Repetto
Thank you for diving into the details! And, to be clear, I am not taking issue with any of Gibbard’s proof itself—if you found an error in his arguments, that’s your own victory, please claim it! Instead, what I point to is Gibbard’s method of DATA-COLLECTION.
Gibbard presupposes that the ONLY data to be collected from voters is a SINGULAR election’s List of Preferences. And, I agree with Gibbard in his conclusion, regarding such a data-set: “IF you ONLY collect a single election’s ranked preferences, then YES, there is no way to avoid strategic voting, unless you have only one or two candidates.”
However, that Data-Set Gibbard chose is NOT the only option. In a Bank, they detect Fraudulent Transactions by placing each customer’s ‘lifetime profile’ into a Cluster (cluster analysis). When that customer’s behavior jumps OUTSIDE of their cluster, you raise a red flag for fraud. This approach is empirically capable of detecting behavior that is mathematically equivalent to ‘strategic voting’.
So, IF each voter’s ‘lifetime profile’ were fed into a Variational Auto-Encoder, to be placed within some Latent Space, within a Cluster of similarly-minded folks, THEN we can see whether they are being strategic in any particular election: if their list of preferences jumps outside of their cluster, they are lying about their preferences. Ignore those votes, and the ballot stays safe from manipulation.
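To make that flagging step concrete, here is a minimal, hypothetical Python sketch. It assumes each voter’s ‘lifetime profile’ has already been summarized as a fixed-length feature vector, and it uses PCA plus k-means as a lightweight stand-in for the Variational Auto-Encoder’s latent space and its clusters; every name and number below is invented for illustration, not drawn from Gibbard or from any real election data.

```python
# Hypothetical sketch: flag a ballot as "strategic" when its embedding lands
# far outside the voter's usual cluster. PCA stands in for the VAE encoder
# purely to keep the example dependency-light; any learned encoder that maps
# a voting history to a latent vector would slot in the same way.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy data: 500 voters, each described by 20 features summarizing their
# lifetime of past ranked ballots (entirely synthetic).
n_voters, n_features = 500, 20
profiles = rng.normal(size=(n_voters, n_features))

# 1. Embed lifetime profiles in a low-dimensional latent space.
encoder = PCA(n_components=3).fit(profiles)
latent = encoder.transform(profiles)

# 2. Group similarly-minded voters into clusters.
n_clusters = 8
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(latent)
labels = kmeans.labels_

# 3. Learn, per cluster, what a "typical" distance-from-centroid looks like.
dists = np.linalg.norm(latent - kmeans.cluster_centers_[labels], axis=1)
thresholds = {c: np.percentile(dists[labels == c], 95) for c in range(n_clusters)}

def looks_strategic(voter_id: int, updated_profile: np.ndarray) -> bool:
    """Flag the new ballot if it jumps outside the voter's usual cluster."""
    z = encoder.transform(updated_profile.reshape(1, -1))[0]
    c = labels[voter_id]
    return np.linalg.norm(z - kmeans.cluster_centers_[c]) > thresholds[c]

# Example: perturb one voter's profile sharply and check the flag.
suspect = profiles[0] + rng.normal(scale=5.0, size=n_features)
print(looks_strategic(0, suspect))
```

In a real system the PCA step would be replaced by the trained VAE encoder, and the 95th-percentile cut-off is just one arbitrary way of deciding when a ballot has ‘jumped outside’ its cluster.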
Do you see how this does not depend upon Gibbard being right or wrong in his proof? As well as the fact that I do NOT disagree with his conclusion that “strategy-proof voting with more than two candidates is not possible IF you ONLY collect a SINGLE preference-list as your one-time ballot”?
“which would be a huge hassle and time cost for whoever speaks out”
Wait—so if leaders were complicit, yet admitting to that would be a hassle, then it’s better that they not mention their complicity? I’m afraid of a movement that makes such casual justifications for hiding malefactors and red flags. I’m going to keep showing outsiders what you all say to each other! :O
You’re welcome to side with convenience; I am not commanding you to perform Dirichlet. Yet! But if you settle for that informality, you give up accuracy. You become MoreWrong, and should not be believed as readily as you would like.
“From this informal perspective, clarity and conciseness matters far more than empirical robustness.”
Then you are admitting my critique: “Your community uses excuses to grant itself a claim of epistemic superiority, while actually relying on a technique that is inadequate and erroneous.” Yup. Thanks for showing me and the public your community’s justification for using wrong techniques while claiming you’re right. Screenshot done!
I expect that, once AGI exists and flops, the spending on AGI researchers will taste sour. The robots with explosives, and the surveillance cameras across all of China, really were bigger threats than AGI X-risk; you’ll only admit it once AGI fails to outperform narrow superintelligences. The larger and more multi-modal our networks become, the more consistently they suffer from “mode collapse”: the network’s ‘world-model’ becomes so strongly self-reinforcing that ALL gradients from the loss-function end up solidifying the pre-existing world-model. Literally, AIs are already becoming smart enough to rationalize everything; they suffer from confirmation bias just like us. And that problem was already really bad by the time they trained GPT-4: go check their leaked training regimen; they had to start over from scratch repeatedly, because the brain found excuses for everything and performance tanked without any hope of recovery. Your AGI will have to be re-run through training 10,000 times before one of the brains isn’t sure-it’s-always-right-about-its-superstitions. Narrow makes more money, and responds better, faster, and cheaper in war; there won’t be any Nash Equilibrium which includes “make AGI”, so the X-Risk is actually ZERO.
Pre-ChatGPT, I wrote the details on LessWrong: https://www.lesswrong.com/posts/Yk3NQpKNHrLieRc3h/agi-soon-but-narrow-works-better