Gutting the FTT token means customers losing money because of their investments, not customer losses via FTX losing custodial funds or tokens, though, doesn’t it?
An Alameda exile told Time that SBF “didn’t have a distinction between firm capital and trading capital. It was all one pool.” That’s at least a badge of fraud (commingling).
Alameda was a prop trading firm, so there isn’t normally any distinction between those. The only reason this didn’t apply was that there was a third bucket of funds, pass-through custodial funds that belonged to FTX customers, which they evidently didn’t pass through due to poor record-keeping. That’s not so much indicative of fraud as of incompetence.
Yes, I see a strong argument for the claim that the companies are in the best position to shoulder the harms that will inevitably come along, and can pass that risk on to their customers through higher prices—but the other critical part is that this also changes incentives, because liability insurers will demand that the firms mitigate the risks. (And this is approaching the GCR argument from a different side.)
I think that the use of insurance for moderate harms is often a commercial boondoggle for insurers, a la health insurance, which breaks incentives in many ways and leads to cost disease. And typical insurance regimes shift the burden of proof about injury in damaging ways, because insurers have deep pockets to deny claims in court and fight cases that establish precedents. I also don’t think that it matters for tail risks—unless unlimited coverage is explicitly mandated, firms will carry caps in the millions of dollars, and will ignore tail risks that would bankrupt them.
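To make the cap point concrete, here’s a toy expected-liability calculation (the cap and loss figures below are illustrative assumptions, not data):

```latex
% With a liability cap K, a firm's expected exposure to a random loss L
% is E[min(L, K)], so everything above the cap is truncated away.
\[
  \mathbb{E}[\text{exposure}] \;=\; \mathbb{E}\left[\min(L, K)\right]
\]
% Illustrative numbers: cap K = \$10M, tail loss L = \$10B occurring
% with probability p. The firm internalizes at most pK, which is
% K/L = 0.1\% of the expected social harm pL, so its incentive to
% mitigate the tail is diluted a thousandfold.
```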
One way to address the tail, in place of strict liability, would be legislation allowing anticipated harms to be stopped via legal action; my understanding is that this type of prior restraint for uncertain harms isn’t currently possible in most domains.
I’d be interested in your thoughts on these points, as well as Cecil and Marie’s.
I would be interested in understanding whether you think that joint-and-several liability among model trainers, model developers, application developers, and users would address many of the criticisms you raise against civil liability. As I said last year, “joint-and-several liability for developers, application providers, and users for misuse, copyright violation, and illegal discrimination would be a useful initial band-aid; among other things, this provides motive for companies to help craft regulation to provide clear rules about what is needed to ensure on each party’s behalf that they will not be financially liable for a given use, or misuse.”
I also think that this helps mitigate the difficulty, under fault-based liability, of proving culpability, but I’m agnostic about which liability regime is justified.
Lastly, I think that your arguments give good reason to develop a clear proposal for some new liability standard, perhaps including requirements for uncapped liability insurance covering some specific portion of eventual damages, rather than assuming that the dichotomy of strict versus fault-based liability is immutable.
If you find anyone who quotes that as an excuse, and a modern Halachic authority would rule that they don’t have too much money for the cap to apply to them, I’ll agree they are just fine giving only 20%. (On the other hand, my personal conclusion is less generous.) But DINKs or single people making $100k+ each, who comprise most of the earning-to-give crowd, certainly don’t have the same excuse!
It was actually quoting the first bit: “The amount of charity one should give is that if you can afford to, give as much as is needed. Under ordinary circumstances, a fifth of one’s property is most laudable. To give one-tenth is normal. To give less than one-tenth is stingy.”
To ruin the joke, cf. Taanis 9a and even more, Yoreh Deah 249:
שיעור נתינתה אם ידו משגת יתן כפי צורך העניים ואם אין ידו משגת כל כך יתן עד חומש נכסיו מצוה מן המובחר ואחד מעשרה מדה בינונית פחות מכאן עין רעה

(“The measure of giving: if one’s means allow, one should give according to the needs of the poor; if one’s means do not allow so much, giving up to a fifth of one’s property is the choicest way to fulfill the mitzvah, one-tenth is the middling measure, and less than that is miserly.”)
That’s much too Meta.
Steinsaltz: “development”—This is the mitzvah of “Tikkun Olam.”
Commentaries on the Mishnah of Rabbi Ord:
Rashi: “three permissible cause areas”—cause areas, not fathers.
Tosfos: “cause areas”—If the Mishna calls these cause areas not fathers, per [Rashi’s] notebooks, why does Rabbah bar bar Hana call them categories? Clearly, these must be categories. How, then, do we explain the words of the master [Rashi]? Perhaps the Mishna was careful not to use the word “category” because parent categories require a listing of child categories, but the child categories are subject to an extensive dispute between charity evaluators. For this reason, it is clear that there are categories with subcategories, but the word “categories” is not used, thereby explaining Rashi’s note.
Ritva: “cause areas”—The answer of Tosfos does not explain Rashi’s words. Why is the dispute about subcategories enough to prevent the Mishna from using exact language? Further, it is unclear which dispute Tosfos refers to, as there are disputes both about interventions, and individual charities. Instead, we see that cause areas are each a Father of Fathers [supercategory], and interventions are the fathers [categories], while specific donation opportunities are the children [subcategories].
Rashi: “long-termism”—the time required is explained in the Gemara.
Rashi: “global health”—The health of humans around the globe, not the globe itself.
Tosfos: “global health”—The sages of Greece and Rome have taught that the world is a ball.
Rashi: “Bal Tashchit”—as the verse says “you must not destroy its trees, wielding the ax against them”
Tosfos: “bal tashchit”—As explained in [Rashi’s] notebooks, this is referring to trees. How is it possible that animal welfare is included, but plant welfare, which is the source of the prohibition, is not? There are those who say that “animal” is not precise. This is difficult, because in that case it should have said “living thing welfare.” It is brought in a Braitha that Brian Tomasik rules this way. However, this is a dispute with our Mishna, and does not resolve the contradiction.
Perhaps worth noting that very-long-term discounting is even more obviously wrong because of light-speed limits and the finite mass available to us, which cap long-term attainable wealth—at which point discounting should be benchmarked against polynomial (cubic) growth rather than exponential growth. And around 100,000–200,000 years out, it gets far worse, once we’ve saturated the Milky Way.
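A rough sketch of the geometry behind the cubic claim (the expansion speed and distances below are stylized assumptions):

```latex
% Resources reachable by expanding at speed v <= c for time t scale
% with the volume of the reachable sphere, i.e., cubically:
\[
  M(t) \;\propto\; \tfrac{4}{3}\pi\,(v t)^3
\]
% Any positive exponential discount rate r eventually swamps this:
\[
  \lim_{t \to \infty} \frac{(v t)^3}{e^{r t}} \;=\; 0 \qquad \text{for all } r > 0,
\]
% so exponential discounting assigns vanishing weight to value that can
% grow only polynomially. And once the Milky Way (radius on the order of
% 5 \times 10^4 light-years) is saturated, growth stalls entirely until
% intergalactic gaps of a million-plus light-years are crossed.
```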
The reason it seems reasonable to view the future 1,000,010 years out as almost exactly as uncertain as 1,000,000 years out is mostly myopia. To analogize: is the ground 1,000 miles west of me more or less uneven than the ground 10 miles west of me? Maybe, maybe not—but I have a better idea of what my near surroundings look like, so they seem more known. For the long-term future, we don’t have much confidence in our projections of either a million or a million and ten years, and it seems hard to see why all the relevant uncertainties would simply go away, other than our simply being unable to achieve any resolution at that distance. (Unless we’re extinct, in which case, yeah.)
To embrace this as a conclusion, you also need to buy fairly strongly into total utilitarianism across the future light cone, as opposed to any understanding of the future, and the present, on which humanity as a species doesn’t change much in value just because there are more people. (Not that I think either view is obviously wrong—but the total view is so generally assumed in EA that it often goes unnoticed, even though it’s very much not a widely shared view among philosophers or the public.)
I misunderstood, perhaps. Audit rates are primarily a function of funding—so marginal funding goes directly to those audits, because they are the most important. But if the US government weren’t being insane, it would fund audits until the marginal societal cost of an audit roughly equaled the marginal revenue it recovers for the state.
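In other words, the standard marginal condition; the notation here ($B$, $R$, $C$) is mine, introduced for illustration:

```latex
% Let B be the audit budget, R(B) the revenue the state recovers, and
% C(B) the total societal cost of auditing (administration plus
% taxpayers' compliance burden). The efficient budget B* satisfies
\[
  R'(B^*) \;=\; C'(B^*),
\]
% and with diminishing returns (R' decreasing in B), underfunding means
% R'(B) > C'(B): each marginal audit dollar returns more than it costs.
```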
The reason I thought this disagreed with your point is that I thought you were disagreeing with the earlier claim that “This is going to lead to billionaires’ actions being surveilled more and thus gone after for crimes more often than the average person. The reward makes it worth it. Billionaires will have far more legal/monetary resources and thus you should naively expect more settlements, particularly without an admission of wrongdoing.” That claim seems to be borne out by the model we both apparently agree with: higher-complexity audits are more worthwhile in terms of return.
And yes, I think that per person, the ultra-wealthy, rather than just people making $200k+, are far more likely to be investigated and prosecuted for white-collar crime, because they get more scrutiny, not because they commit more crime, even though they are also more likely to get off without a large punishment via lawyers. But the analysis looked at rates of guilt, not punishment sizes.
First, to your second point: I agree that they aren’t comparable, so I don’t want to respond to that part of your discussion. I was not, in this specific post, arguing that anything about safety in the two domains is comparable. The claim, which you accept in the final paragraph, is that there is an underlying fallacy present in both places.
However, returning to your first, tangential point: the claim that the acceleration-versus-deceleration debate is theoretical and academic seems hard to support. Domains where everyone is dedicated to minimizing regulation and going full speed ahead are vastly different from those where people agree that significant care is needed, and where there is significant regulation and public debate. You seem to explicitly admit exactly this when you say that nuclear power is very different from AI because of the “very high levels of anti-nuclear campaigning and risk aversion”—that is, public pressure against nuclear seems to have stopped the metaphorical tide. So I’m confused about your beliefs here.
“The analogies establish almost nothing of importance about the behavior and workings of real AIs”
You seem to be saying that there is some alternative that establishes something about “real AIs,” but then you admit these real AIs don’t exist yet, and you’re discussing “expectations of the future” by proxy. I’d like to push back, and say that I think you’re not really proposing an alternative, or that to the extent you are, you’re not actually defending that alternative clearly.
I agree that arguing by analogy about current LLM behavior is less useful than having a working theory of interpretability and LLM cognition—though we don’t have any such theory, as far as I can tell. But I have an even harder time understanding what you’re proposing as a superior way of discussing a future situation that isn’t amenable to that type of theoretical analysis, given that we are trying to figure out where we do and do not share intuitions, and which models are or are not appropriate for describing the future technology. And I’m not seeing a gears-level model proposed, and I’m not seeing concrete predictions.
Yes, arguing by analogy can certainly be slippery and confusing, and I think it would benefit from grounding in concrete predictions. And the use of any specific base rate is deeply contentious, since reference classes are always debatable. But at least it’s clear what the argument is, since it’s an analogy. In contrast, arguing by direct appeal to your intuitions, where you claim your views are a “straightforward extrapolation of current trends,” is done without reference to your reasoning process. And that reasoning process, which lacks an explicit gears-level model and so rests on informal human reasoning (and is therefore, as Lakoff argues, deeply rooted in metaphor anyway), seems worse—it’s reasoning by analogy with extra steps.
For example, what does “straightforward” convey, when you say “straightforward extrapolation”? Well, the intuition the words build on is that moving straight, as opposed to extrapolating exponentially or discontinuously, is better or simpler. Is that mode of prediction easier to justify than reasoning via analogies to other types of minds? I don’t know, but it’s not obvious, and dismissing one as analogy but seeing the other as “straightforward” seems confused.
I do too!
I think there are useful analogies between specific aspects of bio, cyber, and AI risks, and it’s certainly the case that where biorisk is based on information security, it’s very similar to cybersecurity, not least in that it requires cybersecurity! And the same is true for AI risk: to the extent that there is a risk of model weights leaking, this is in part a cybersecurity issue.
So yes, I certainly agree that many of the dissimilarities with AI are not present when analogizing to cyber. More generally, though, I’m not sure cybersecurity is a good analogy for biorisk; I have heard that computer security people often dislike the comparison of computer viruses to biological viruses for that reason, though the two certainly share some features.