My point is that, in the real world, stealing money does not serve the goal of increasing funding for your cause in expectation.
Why not? What if we can generate tens of billions of dollars through fraudulent means? We can buy a lot of utility with that money, after all. Perhaps even save humanity from the brink of extinction.
And what if we have fairly good reason to think that we will get away with it? Surely the expected value would start to look pretty enticing by then, no?
Frankly, I’d like to see your calculations. If you really believe that SBF’s fraud did not have net positive value in expectation, then prove it. Do the math for us. At what point does the risk become unacceptable, and how much money would it take to offset that risk?
Do you know? Have you run the calculations? Or do you just have faith that the value in expectation will be net negative? Because right now I’m not seeing calculations. I am just seeing unsubstantiated assertions.
But I’ll throw you a bone. For the sake of argument, let’s suppose that you crunch the numbers so that the math conveniently works out. Nice.
But is this math consistent with Nick Bostrom’s math, or Will’s math? Is it consistent with the view that 100 dollars donated to AI safety is worth one trillion human lives? Or that every second of delayed technological development is just as bad as the untimely death of 10^29 people?
On the face of it, it seems extremely improbable that this math could be consistent. Because what if Sam was very likely to get away with it, but just got unlucky? Alternatively, what if the risks were higher but SBF had the potential to become the world’s first trillionaire? Would that change things?
If it does, then this math seems flimsy. And if we want to avoid that flimsiness, we need to say that Will’s or Nick’s math is wrong.
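To make the tension concrete, here is a purely illustrative back-of-the-envelope version of the naive EV calculation at issue, written as a short Python sketch. The $100-per-trillion-lives exchange rate and the “tens of billions” figure are the ones quoted above; the probability of getting caught and the size of the downside are placeholder assumptions of my own, not anyone’s actual estimates.

# Purely illustrative sketch of the naive EV calculation under discussion.
# The lives-per-dollar figure is the one quoted above; p_caught and
# downside_lives are placeholder assumptions, not anyone's real estimates.

lives_per_dollar = 1e12 / 100   # quoted assumption: $100 to AI safety ~ 1e12 lives in expectation
fraud_proceeds = 10e9           # "tens of billions of dollars"
p_caught = 0.9                  # placeholder: assume getting caught is very likely
downside_lives = 1e9            # placeholder: ecosystem/reputational damage, in life-equivalents

ev_lives = (1 - p_caught) * fraud_proceeds * lives_per_dollar - p_caught * downside_lives
print(f"naive expected lives saved: {ev_lives:.2e}")  # ~1e19 on these numbers

On these (quoted plus placeholder) numbers, the assumed downside would have to grow by roughly ten orders of magnitude before the sign flips, which is exactly why the consistency question above seems pressing.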
But here’s the rub: SBF could have calibrated his own decision-theoretic musings to Nick and Will’s, no? And if he did, that would suggest that Nick’s and/or Will’s math is dangerous, would it not? And if their math is dangerous, then there must be something wrong with EA’s messaging. So perhaps EA, and EV thinking in general, does in fact bear some responsibility for this mess.
This brings us to your edit:
the EA message is not the problem.
Care to elaborate on this point?
How do you know this? Are you sure that the EA message is not the problem? What does it mean to say that a message is a ‘problem’, in this case? Would the EA message be a problem if it were true that, had EA never existed, SBF would never have committed massive financial fraud?
Because this counterfactual claim seems very likely to be correct. (See this Bloomberg article here.) So this would seem to suggest that EA is part of the problem, no?
Because, surely, if EA is causally responsible for this whole debacle, then “the EA message” is at least part of the problem. Or do you disagree?
If you do disagree, then: What does it mean, in your view, for something to be a “problem”? And what exactly would it take for “the EA message” to be “the problem”?
And last, but certainly not least: Is there anything at all that could convince you that EV reasoning is not infallible?
“What if...? Have you run the calculations? … On the face of it...”
Did you even read the OP? Your comments amount to nothing more than “But my naive utilitarian calculations suggest that these bad acts could really easily be justified after all!” Which is simply non-responsive to the arguments against naive utilitarianism.
I’m not going to repeat the whole OP in response to your comments. You repeatedly affirm that you think naive calculations, unconstrained by our most basic social knowledge about reliable vs counterproductive means of achieving social goals, are suited to answering these questions. But that’s precisely the mistake that the OP is arguing against.
Is there anything at all that could convince you that EV reasoning is not infallible?
This is backwards. You are the one repeatedly invoking naive “EV reasoning” (i.e. calculations) as supposedly the true measure of expected value. I’m arguing that true expected value is best approximated when constrained by reliable heuristics.
If you do disagree, then: What does it mean, in your view, for something to be a “problem”?
I mean for it to be false, unjustified, and something we should vociferously warn people against. Not every causal contribution to a bad outcome is a “problem” in this sense. Oxygen also causally contributed to every bad action by a human—without oxygen, the bad act would not have been committed. Even so, oxygen is not the problem.
You repeatedly affirm that you think naive calculations, unconstrained by our most basic social knowledge about reliable vs counterproductive means of achieving social goals, are suited to answering these questions
Where did I say this?
I’m not going to repeat the whole OP in response to your comments.
You’re assuming that the original post already answered my question. But it didn’t. Your post just says “trust me guys, the math checks out”. But I see no math. So where did you get this from?
I’m arguing that true expected value is best approximated when constrained by reliable heuristics.
“Arguing”? Or asserting?
If these are arguments, they are not very strong. No one outside of EA is convinced by this post. I’m not sure if you saw, but this post has even become the subject of ridicule on Twitter.
Not every causal contribution to a bad outcome is a “problem” in this sense. Oxygen also causally contributed to every bad action by a human—without oxygen, the bad act would not have been committed.
Okay, I didn’t realize we were going back to PHIL 101 here. If you need me to spell this out explicitly: SBF chose his career path because prominent EA leaders encouraged him to earn to give. Without EA, he would never have had the means to start FTX. The earn-to-give model encourages shady business practices.
The connection is obvious.
Saying this has nothing to do with EA is like saying that Stalin’s governance had nothing to do with Marxism.
Denying the link is delusional and makes us look like a cult.