Also, the only known raids on the corporate assets happened post-crash, and therefore long post-audit. Under management’s espoused worldview, everything before that was plausibly ‘good for the company’, in the sense that it benefitted the company in raw EV across all possible worlds, with no discounting of extreme gains and no extra penalty for massive losses.
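(A toy illustration of that worldview, with numbers invented purely for concreteness rather than taken from anything above: consider a bet that gains ten billion with probability 0.51 and loses ten billion with probability 0.49. In raw EV terms,

$$\mathbb{E}[X] = 0.51 \cdot (+10) + 0.49 \cdot (-10) = +0.2 > 0,$$

so a pure expected-value maximizer takes it every time, while a risk-averse evaluator, say one maximizing $\mathbb{E}[\log(W + X)]$ for current wealth $W$, refuses it whenever the downside would wipe out most of $W$.)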
That wasn’t the question. The question was why any company would go to less-than-maximally-trustworthy auditors.
And it makes you wonder why companies would go to these known-worse auditors, especially if they can afford the best auditing (as FTX should have been able to), if they don’t have something to hide.
Complying with an audit is expensive, and not just in money.
A thorough audit in progress is going to disrupt the workflow of all or most of your company in order to look at their daily operations more closely. This reduces productivity and slows down the ability to change anything, even if nothing improper is happening. It is expensive and disruptive.
A thorough audit is also going to recommend changes. Not just changes required to be technically in compliance, but ones which will make it easier to audit for compliance in the future and ones which remove something that could potentially be mistaken for bad behavior in a dim light. Making those changes is expensive and disruptive.
If you don’t need extremely high levels of trust from your customers and partners, choosing to receive a thorough audit means you’re paying a bunch of unnecessary costs. Much better to get a more lax audit, which is less disruptive to have ongoing and less disruptive to handle once the results are in. Better still if it also costs less money.
The correct audit is the one that provides your customers and clients (and/or your own management) with exactly as much trust and reassurance as you need them to get, and no more. Anything less and you lose the business of people who don’t trust you; anything more and you’re paying a cost for reassurance you get no extra benefit from.
Simple: It’s another meta thing. Those have a very poor track record and seem to require extraordinary competence to be net-positive.
That’s literally just the same thing I said with more words. They don’t have reasons to think finance is net negative; to them it is simply polluted with money and therefore bad.
Those two are perfectly good examples. They did. Every successful startup does something approximately that bad, on the way to the top.
Because finance people are bad people and therefore anything associated with them is bad. Or for a slightly larger chain, because money is bad, people who spend their lives seeking money are therefore bad, and anything associated with those people is bad.
Don’t overthink this. It doesn’t have to make sense, there just have to be a lot of people who think it does.
Why wouldn’t it be controversial? It suggests something other than people acting according to their personal pet projects, ideologies, and social affiliations, and proposes a way by which those can be compared and found wanting. The fact that it also comes with significantly more demandingness than anything else just makes it a stronger implicit attack.
Most people will read EA as a claim to the moral high ground, regardless of how nicely it’s presented to them. Largely because it basically is one. Implicit in every claim to the moral high ground (even if it’s never stated, and even if it’s directly denied) is the corollary claim that other people’s claims to the moral high ground are lesser or even invalid. Which is a claim of superiority.
That will produce defensiveness and hostility by default.
Many people’s livelihoods depend on ineffective charity, of course, and Sinclair’s Rule is also a factor. But it’s a minor one. The main factor is that the premise of EA is that charity should be purchasing utilons. And even starting to consider that premise makes a lot of people tacitly realize that their political and charitable work may have been purchasing warm fuzzies, which is an unpleasant feeling that they are motivated to push back against to protect their self-image as a do-gooder/good person/etc.
Of course, there is no need for contradiction. You can purchase both utilons and warm fuzzies, so long as you do it separately. But in my estimation, no more than 5% of the world, at the absolute most, is amenable to buying warm fuzzies and utilons separately. (More likely it’s less than 0.5%.) The other 95% will either halt, catch fire, and reorient their internal moral compass, or, much more commonly, get outraged that you dared to pressure them to do that. (Whether you actually applied any pressure is basically immaterial.)
No, you’re thinking about it entirely wrong. If everyone who did something analogous to Alameda 2018 was shunned, there probably wouldn’t be any billionaire EA donors at all. It was probably worse than most startups, but not remarkably worse. It was definitely not a reliable indicator that a fraud or scandal was coming down the road.
C, Neither. The obvious interpretation is exactly what he said: people ultimately don’t care whether you maintained their standard of ‘ethical’ as long as you win. Which means that, as far as talking about other people’s ethics goes, it’s all PR, regardless of how ethical you’re being by your own standards.
(I basically concur. Success earns massive amounts of social capital, and that social capital can buy a whole lot of forgiveness. Whether it also comes with literal capital which literally buys forgiveness is almost immaterial next to that.)
So he’s said essentially nothing about his own ethics and whether he believes he stuck to them. Later elaboration strongly suggests he considered his actions ‘sketchy’ but doesn’t even say that outright. This is entirely consistent with SBF believing that he never did anything wrong on purpose.
Whether you think that belief is true, false but reasonable, or totally delusional is a separate matter. Just based on this interview I’d say “false but reasonable”, but there are a lot of unsubstantiated claims of a history of lying that I haven’t evaluated.
Again, that’s orthogonal to the actual problems that surfaced.
Yeah, still not seeing much good faith. You’re still ahead of AutismCapital, though, which is 100% bad faith 100% of the time. If you believe a word it says, I have a bridge to sell you.
Strongly disagree. That criticism is mostly orthogonal to the actual problems that surfaced. Conflicts of interest were not the problem here.
Most of that isn’t even clearly bad, and I find it hard to see good faith here.
Your criticism of Binance amounts to “it’s cryptocurrency”. Everyone knows crypto can be used to facilitate money laundering; this was, for Bitcoin, basically the whole point. The same goes for the criticism of Ponzi schemes: there were literally dozens of ICOs for things that were overtly labeled as Ponzis (Ponzicoin was one of the more successful ones, because it had a good name). Many people walked into this with eyes open; many others didn’t, but they were warned, and they just didn’t heed the warnings. Should we also refuse to take money from anyone who bets against r/wallstreetbets and Robinhood? Casinos? Anyone who runs a platform for sports bets? Prediction markets? Your logic would condemn them all.
It’s not clear why FTX would want to spend this amount of money on buying a fraudulent firm.
FTX would prefer that the crypto sector stay healthy, and backstopping companies whose schemes were failing serves that goal. That is an entirely sufficient explanation and one with no clear ethical issues or moral hazard.
Even in retrospect, I think this was bad criticism and it was correct to downvote it.
The ‘unambitious’ thing you ask the AI to do would create worldwide political change. It is absurd to think that it wouldn’t. Even ordinary technological change creates worldwide political change at that scale!
And an AGI having that little impact is also not plausible. If that’s all you do, the second mover (and possibly the third, fourth, and fifth, if everyone moves slowly) spits out an AGI and flips the table, because you can’t be that unambitious and still block other AGIs from performing pivotal acts, and even if you want to think small, the other actors won’t. Even if they are approximately as unambitious, they will have different goals, and the interaction will immediately amp up the chaos.
There is just no way for an actual AGI scenario to meet these guidelines. Any attempt to draw a world which meets them has written the bottom line first and is torturing its logic trying to construct a vaguely plausible story that might lead to it.
Again, that would produce moderate-to-major disruptions in geopolitics. A first doubling that takes eight years with any recursive self-improvement at work is also pretty implausible, because RSI implies more discontinuity than that; but that doesn’t matter here, as even that scenario would cause massive disruption.
If humans totally solve alignment, we’d probably ask our AGI to take us to Eutopia slowly, allowing us to savor the improvement and adjust to the changes along the way, rather than leaping all the way to the destination in one terrifying lurch.
Directly conflicts with the geopolitical requirements. Also not compatible with the ‘sector by sector’ scope of economic impact: an AGI would be revolutionizing everything at once, and the only question would be whether it was merely flipping the figurative table or going directly to interpolating figurative gas into every figurative chemical bond in the table simultaneously and leaving it to crumble into figurative dust.
Otherwise you’d be left with three options that all seem immoral
The ‘Silent elitism’ view is approximately correct, except in its assumption that there is a current elite who endorse the eutopia; there is not. Even the most forward-thinking people of today, the Ben Franklins of the 2020s, would balk. The only way humans know how to transition toward a eutopia is slowly, over generations. Since this has a substantial cost, speedrunning that transition is desirable, but exactly how that speedrun can be accomplished without leaving a lot of wreckage in its wake is a topic best left for superintelligences, or at the very least for intelligences augmented somewhat beyond the best capabilities we currently have available.
Pure propaganda—instead of trying to make a description that’s an honest attempt at translating a strange future into something that ordinary people can understand, we give up all attempts at honesty and just make up a nice-sounding future with no resemblance to the Eutopia which is secretly our true destination.
What a coincidence! You have precisely described this contest. This is, explicitly, a “make up a nice-sounding future with no resemblance to our true destination” contest. And yes, it’s at best completely immoral. At worst they get high on their own supply and use it to set priorities, in which case it’s dangerous and aims us toward UFAI and impossibilities.
At least it’s not the kind of believing-absurdities that produces people willing to commit atrocities in service of those beliefs. Unfortunately, a poor understanding of alignment produces plenty of atrocities from minimal provocation anyway.
the closest possible description of the indescribable Eutopia must be something that sounds basically good (even if it is clearly also a little unfamiliar), because the fundamental idea of Eutopia is that it’s desirable
This is not true. There is no law of the universe which states that there must be a way to translate the ways in which a state is good for its inhabitants (who are transhuman or posthuman, i.e. possessed of humanity and various other important mental qualities) into words, conveyable in present human language by text or speech, that sound appealing. That might be a nice property for a universe to have, but ours doesn’t have it.
Some point along a continuum from here to there, a continuum we might slide up or down with effort, probably can be so described (a fixed-point theorem of some sort probably applies). However, that need not be an honest depiction of what life will be like if we slide in that direction, any more than showing a vision of the Paris Commune to a Parisian on the day Napoleon fell (stipulating that they approved of it) would be an honest view of Paris’s future.
“Necessarily entails singularity or catastrophe”, while definitely correct, is a substantially stronger statement than I made. To violate the stated terms of the contest, an AGI need only violate “transforming the world sector by sector”. An AGI would not transform things gradually, or in a manner limited to specific portions of the economy; it would be broad-spectrum and immediate. Some narrow sectors would be rendered immediately unrecognizable, and virtually every sector would be drastically transformed within five years, almost certainly within two.
An AGI which has any ability to self-improve will not wait that long. It will be months, not years, and probably weeks, not months. A ‘soft’ takeoff would still be faster than five years. These rules mandate not a soft takeoff, but no takeoff at all.
Even a slow takeoff! If there were recursive self-improvement at work at all, on any scale, you wouldn’t see anything like this. You’d see moderate-to-major disruptions in geopolitics, and many or all technology sectors being revolutionized simultaneously.
This scenario is “no takeoff at all”—advancement happening only at the speed of economic growth.
The ‘stylistic choices’ were themselves evidence of wrongdoing, and most of the evidence they offered in rebuttal both misstated the claims they claimed to be refuting and provided further (unwitting?) evidence of wrongdoing.