A wayward math-muddler for bizarre designs, artificial intelligence options, and spotting trends no one wanted; articles on Medium as Anthony Repetto
Anthony Repetto
Beyond Meta: Large Concept Models Will Win
Thank you for diving into the details! And, to be clear, I am not taking issue with Gibbard’s proof itself—if you found an error in his arguments, that’s your own victory, please claim it! Instead, what I point to is Gibbard’s method of DATA-COLLECTION.
Gibbard presupposes that the ONLY data to be collected from voters is a SINGLE election’s List of Preferences. And I agree with Gibbard’s conclusion regarding such a data-set: “IF you ONLY collect a single election’s ranked preferences, then YES, there is no way to avoid strategic voting, unless you have only one or two candidates.”
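For reference, here is my paraphrase of the standard Gibbard–Satterthwaite statement (not Gibbard’s exact 1973 wording, which covers game forms more generally); notice that the rule’s entire input is one ranking per voter:

```latex
% A: candidate set with |A| >= 3;  L(A): strict rankings of A;
% f: a voting rule that takes ONE ranking from each of n voters and can elect anyone.
\[
  f : L(A)^n \to A \ \text{(onto)}, \qquad |A| \ge 3 .
\]
% Strategy-proofness: no voter i ever gains by reporting P_i' instead of their true P_i.
\[
  \Big( \forall i,\ \forall P_{-i},\ \forall P_i,\ \forall P_i' :\ \
        f(P_i, P_{-i}) \ \succeq_{P_i} \ f(P_i', P_{-i}) \Big)
  \;\Longrightarrow\; f \ \text{is dictatorial.}
\]
```

The hypothesis quantifies only over single-election rankings; that domain, L(A)^n, is exactly the data-collection restriction I am pointing at.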
However, that data-set Gibbard chose is NOT the only option. A bank detects fraudulent transactions by placing each customer’s ‘lifetime profile’ into a cluster (cluster analysis); when a customer’s behavior jumps OUTSIDE of their cluster, the bank raises a red flag for fraud. This is empirically capable of detecting what is mathematically equivalent to ‘strategic voting’.
So, IF each voter’s ‘lifetime profile’ were fed into a Variational Auto-Encoder, to be placed within some latent space, within a cluster of like-minded folks, THEN we could see whether they are being strategic in any particular election: if their list of preferences jumps outside of their cluster, they are lying about their preferences. Ignore those votes, and the ballot is safely protected from manipulation.
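Here is a minimal sketch of that pipeline in Python (PyTorch plus scikit-learn). The data, architecture, thresholds, and names (flag_strategic, PROFILE_DIM, and so on) are all hypothetical stand-ins for illustration, not a working election system:

```python
# Minimal sketch: encode each voter's 'lifetime profile' of past ballots with a
# tiny VAE, cluster the latent codes, then flag a new ballot whose latent code
# lands far outside the voter's own cluster. All data and names are stand-ins.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

N_VOTERS, PROFILE_DIM, LATENT_DIM = 10_000, 40, 8  # e.g. 10 past elections x 4 ranking slots

class VAE(nn.Module):
    def __init__(self, d_in=PROFILE_DIM, d_z=LATENT_DIM):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU())
        self.mu = nn.Linear(64, d_z)
        self.logvar = nn.Linear(64, d_z)
        self.dec = nn.Sequential(nn.Linear(d_z, 64), nn.ReLU(), nn.Linear(64, d_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

def train(vae, profiles, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(vae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, mu, logvar = vae(profiles)
        recon_loss = ((recon - profiles) ** 2).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        loss = recon_loss + 0.1 * kl
        opt.zero_grad(); loss.backward(); opt.step()

# Stand-in numeric encoding of lifetime voting histories (a real system would
# need a genuine featurization of past ranked ballots).
profiles = torch.randn(N_VOTERS, PROFILE_DIM)
vae = VAE()
train(vae, profiles)

with torch.no_grad():
    latents = vae.mu(vae.enc(profiles)).numpy()  # each voter's latent 'position'

# Cluster like-minded voters in the latent space.
km = KMeans(n_clusters=20, n_init=10).fit(latents)

def flag_strategic(updated_profile: torch.Tensor, voter_idx: int, z_thresh=3.0) -> bool:
    """Flag a ballot if the voter's updated profile jumps far outside their usual cluster."""
    with torch.no_grad():
        z = vae.mu(vae.enc(updated_profile[None, :])).numpy()[0]
    members = latents[km.labels_ == km.labels_[voter_idx]]
    center = members.mean(axis=0)
    typical = np.linalg.norm(members - center, axis=1)  # spread of the voter's cluster
    return np.linalg.norm(z - center) > typical.mean() + z_thresh * typical.std()
```

Distance from the cluster center is just one way to define the ‘jump’; reconstruction error or Mahalanobis distance would serve the same purpose.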
Do you see how this does not depend upon Gibbard being right or wrong in his proof? And how I do NOT disagree with his conclusion that “strategy-proof voting with more than two candidates is not possible IF you ONLY collect a SINGLE preference-list as your one-time ballot”?
“which would be a huge hassle and time cost for whoever speaks out”
Wait—so if leaders were complicit, yet admitting to that would be a hassle, then it’s better that they not mention their complicity? I’m afraid of a movement that makes such casual justifications for hiding malefactors and red flags. I’m going to keep showing outsiders what you all say to each other! :O
You’re welcome to side with convenience; I am not commanding you to perform a Dirichlet analysis. Yet! If you take that informality, you give up accuracy. You become MoreWrong, and should not be believed as readily as you would like.
“From this informal perspective, clarity and conciseness matters far more than empirical robustness.”
Then you are admitting my critique: “Your community uses excuses to allow itself a claim of epistemic superiority, when it is actually using a technique which is inadequate and erroneous.” Yup. Thanks for showing me and the public your community’s justification for using wrong techniques while claiming you’re right. Screenshot done!
Bayes is Out-Dated, and You’re Doing it Wrong
Alistair, I regret to inform you that after four years of Leverage’s Anti-Avoidance Training, the cancer has spread: the EA community at large is now repeatedly aghast that outsiders are noticing its subtle rug-sweeping of sexual harassment and dismissal of outside critique. In barely a decade, the self-described rats have swum ’round a stinking sh!p. I’m still amazed that, for the last year, as I kept bringing forth concerns and issues, the EA members each insisted ‘no problems here, no, never, we’re always so perfect....’ Yep. It shows.
This aged well… and it reads like what ChatGPT would blurt, if you asked it to “sound like a convincingly respectful and calm cult with no real output.” Your ‘Anti-Avoidance,’ in particular, is deliciously Orwellian. “You’re just avoiding the truth, you’re just confused...”
I was advocating algal and fish farming, including bubbling air into the water and sopping up the fish poop with crabs and bivalves—back in 2003. I spent a few years trying to tell any marine biologist I could. Fish farming took off years later, and recently they realized you should bubble air and catch the poop! I consider that a greater real-world accomplishment than your ‘training 60+ people on anti-avoidance of our pseudo-research.’ Could you be more specific about Connection Theory, and the experimental design of the research you conducted and pre-registered to determine that it was correct? I’m sure you’d have to get into some causality weeds, so those experimental designs are going to be top-notch, right? Or is it just Geoff writing with the rigor of Freud, on a Slack he deleted?
Third-generation Bay Area, here—and if you aren’t going to college at Berkeley or swirling in the small cliques of SF’s 800,000 residents, yeah, not a lot of polycules. I remember when Occupy oozed its way through here and left a residue of ‘say-anything polyamorists’ who were excited to share their ‘pick-up artist’ techniques when only other men were present. “Gurus abuse naïve hopefuls for sex” has been a recurring theme of the Bay every few decades, but the locals don’t buy it.
I am terrified that you were downvoted to obscurity. These posts, the ones that EA hides, are the ones the public needs to see the most.
I am terrified that you were so thoroughly downvoted… “EA only wants to hear shallow critiques, not deep ones” seems to be happening vigorously, still.
It’s a bad sign that you were being downvoted! I gave you my upvote!
Another wonderful example of “so simple, why didn’t anyone try it before” just this week:
Robert Murray-Smith’s wind generators seem to have a levelized cost comparable to the big turbines, while being simple, cheap, and redundant.
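For anyone checking that claim, levelized cost is just discounted lifetime cost divided by discounted lifetime energy; here is a toy calculation in Python, with made-up numbers that are NOT Murray-Smith’s actual figures:

```python
# Toy LCOE (levelized cost of energy): discounted lifetime costs divided by
# discounted lifetime energy output. All numbers are invented for illustration.
def lcoe(capex, annual_opex, annual_kwh, years=20, discount_rate=0.05):
    """Return levelized cost in $/kWh."""
    discounted_costs = capex + sum(annual_opex / (1 + discount_rate) ** t for t in range(1, years + 1))
    discounted_energy = sum(annual_kwh / (1 + discount_rate) ** t for t in range(1, years + 1))
    return discounted_costs / discounted_energy

print(lcoe(capex=1_500_000, annual_opex=45_000, annual_kwh=6_000_000))  # one large turbine
print(lcoe(capex=900_000, annual_opex=60_000, annual_kwh=4_500_000))    # many small, redundant units
```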
Downvote Outsiders’ Critiques, to Prove Our Bias
Thank you! I remember hearing about Bayesian updates, but rationalizations can wipe those away quickly. From the perspective of Popper, EAs should try “taking the hypothesis that EA...” and then attempt to prove themselves wrong, instead of using a handful of data-points to reach their preferred, statistically irrelevant conclusion, all the while feeling confident.
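To make that concrete, here is a toy Bayes-factor calculation in Python; the hypotheses and numbers are hypothetical, but the point stands: a handful of confirming data-points buys only modest odds, nowhere near certainty:

```python
# Toy Bayes-factor calculation: "the claim holds 80% of the time" versus
# "it's a coin flip," after observing k successes in n trials. Numbers invented.
from math import comb

def posterior_odds(prior_odds, k, n, p_if_true=0.8, p_if_false=0.5):
    def likelihood(p):
        return comb(n, k) * p**k * (1 - p)**(n - k)
    return prior_odds * likelihood(p_if_true) / likelihood(p_if_false)

print(posterior_odds(1.0, k=4, n=4))    # ~6.6 : 1  -- a handful of anecdotes is weak evidence
print(posterior_odds(1.0, k=40, n=40))  # ~1.5e8 : 1 -- a real sample size is a different story
```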
continuing my response:
Gregory Lewis said to you: “If the objective is to persuade this community to pay attention to your work, then even if in some platonic sense their bar is ‘too high’ is neither here nor there: you still have to meet it else they will keep ignoring you.” He is arguing an ultimatum: “if we’re dysfunctional, then you still have to bow to our dysfunction, or we get to ignore you.” That has no standing in epistemics, and it is a bad-faith argument. If he entertained the possibility of his own organization’s dysfunction with the same probability he asks you to assign to doubting your own work, he would realize that “you gotta toe the line, even if our ‘bar’ is nonsense” is just nonsense! And if they are in fact dysfunctional, Gregory Lewis is lounging in it!
The worst part is that, once their fallacies and off-hand dismissals are pointed out to them and they can give no real refutation, they just go silent. It’s bizarre that they think they are behaving in a healthy, rational way. I suspect that many of them aren’t as competent as they hope, and they need to hide that fact by avoiding real analysis. I’d be glad to talk to any AI Safety folks in the Bay, myself—I’ve been asking them since December of last year. When I presented my arguments, they waved them away without refutation, just as they have done to you.
Thank you for speaking up, even as they again cast doubt. Gregory Lewis supposed that the way to find truth was this: “We could litigate which is more likely—or, better, find what the ideal ‘bar’ insiders should have on when to look into outsider/heterodox/whatever work, and see whether what has been presented so far gets far enough along the ?crackpot/?genius spectrum to warrant the consultation.” He entirely ignores the proper ‘bar’ for new ideas: consideration of the details, and refutation of those details. If they cannot refute those details, then they have no defense against your arguments! Yet they claim such a circumstance as their victory, by supposing that some ‘bar’ of opinion-mongering should decide a worthy thought. This forum is very clearly defending its ‘turf’ from outsiders; the ‘community’ here in the Bay Area is similarly cliquish, blacklisting members and then hiding that fact from prospective members and donors.
I expect that, once AGI exists and flops, the spending on AGI researchers will taste sour. The robots with explosives, and the surveillance cameras across all of China, really were the bigger threats than AGI X-risk; you’ll only admit it once AGI fails to outperform narrow superintelligences. The larger and more multi-modal our networks become, the more consistently they suffer from “modal collapse”: the network’s ‘world-model’ becomes so strongly self-reinforcing that ALL gradients from the loss-function end up solidifying the pre-existing world-model. Literally, AIs are already becoming smart enough to rationalize everything; they suffer from confirmation bias just like us. That problem was already really bad by the time they trained GPT-4. Go check their leaked training regimen: they had to start over from scratch repeatedly, because the brain found excuses for everything and performance tanked without any hope of recovery. Your AGI will have to be re-run through training 10,000 times before one of the brains isn’t sure-it’s-always-right-about-its-superstitions. Narrow AI makes more money, and responds better, faster, and cheaper in war—there won’t be any Nash Equilibrium which includes “make AGI,” so the X-Risk is actually ZERO.
Pre-ChatGPT, I wrote the details on LessWrong: https://www.lesswrong.com/posts/Yk3NQpKNHrLieRc3h/agi-soon-but-narrow-works-better