Are you assuming that "stealing money" wouldn't (or couldn't possibly?) prove counterproductive to the cause of AI safety research and funding? Because I'm pretty sure there's no mathematical theorem that rules out the possibility of a criminal action turning out to be counterproductive in practice! And that's the issue here, not some pristine thought experiment with frictionless planes.
I am using the math that the CEA has pushed. Here's a quote from The Case For Strong Longtermism by Will MacAskill and Hilary Greaves (page 15):
That would mean that every $100 spent had, on average, an impact as valuable as saving one trillion (resp., one million, 100) lives on our main (resp. low, restricted) estimate – far more than the near-future benefits of bed net distribution.
If 100 dollars could be morally equivalent to saving one trillion lives, then I'd steal money too.
And here is a quote from Nick Bostrom's paper Astronomical Waste (paragraph 3):
Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.
If 10^29 human lives are lost every second we delay technological development, then why wait to get the money through more legitimate means?
Of course, this all could backfire. There are risks involved. But the risks are not enough to keep the expected value of fraud from being extremely high. Taking all this into consideration, then, is it any surprise that SBF did what he did?
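Just to make the arithmetic I'm gesturing at explicit, here's a rough sketch. The probability and dollar figures are placeholders I'm inventing purely for illustration; the only number taken from the literature is the value-per-dollar multiplier quoted above.

```python
# Naive expected-value sketch of "fraud for the greater good", using the
# value-per-dollar figure quoted above. The probability and dollar amounts
# below are invented placeholders, not anyone's actual estimates.

LIVES_PER_DOLLAR = 1e12 / 100   # Greaves & MacAskill's main estimate: $100 ~ one trillion lives

p_success = 0.5                 # placeholder: chance the fraud is never caught
stolen = 10e9                   # placeholder: $10B diverted to longtermist causes
loss_if_caught = 1e9            # placeholder: $ of funding and goodwill destroyed if it blows up

ev_in_lives = (p_success * stolen - (1 - p_success) * loss_if_caught) * LIVES_PER_DOLLAR
print(f"{ev_in_lives:.2e} lives saved in expectation")   # ~4.50e+19 on these placeholders
```

On anything like these numbers, the downside term barely registers. That is the worry.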
And again, this is not my math. This is the math pushed by prominent and leading figures in EA. I am just quoting them. Don't shoot the messenger.
And on that note: I recommend you watch this YouTube video, and use it as a source of reflection:
That math shows that the stakes are high. But that just means that it's all the more important to make actually prudent choices. My point is that, in the real world, stealing money does not serve the goal of increasing funding for your cause in expectation.
You keep conflating "funding good causes is really important" (EA message) with "stealing money is an effective way to fund important causes" (stupid criminal fantasy).
I think it's really important to be clear that the EA message and the stupid criminal fantasy are not remotely the same claim.
Edited to add: it would certainly be bad to hold the two views in conjunction. But, between these two claims, the EA message is not the problem.
My point is that, in the real world, stealing money does not serve the goal of increasing funding for your cause in expectation.
Why not? What if we can generate tens of billions of dollars through fraudulent means? We can buy a lot of utility with that money, after all. Perhaps even save humanity from the brink of extinction.
And what if we think we have a fairly good reason to believe that we will get away with it? Surely the expected value would start to look pretty enticing by then, no?
Frankly, I'd like to see your calculations. If you really believe that SBF's fraud did not have net positive value in expectation, then prove it. Do the math for us. At what point does the risk become unacceptable, and how much money would it take to offset that risk?
Do you know? Have you run the calculations? Or do you just have faith that the value in expectation will be net negative? Because right now I'm not seeing calculations. I am just seeing unsubstantiated assertions.
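Here is the shape of the calculation I'd want to see from you. The dollar figures below are placeholders, since I don't claim to know the real ones; the structure is what matters.

```python
# Break-even sketch: how confident of getting away with it would someone
# have to be before the naive expectation turns positive? Both dollar
# figures are invented placeholders, chosen only to frame the question.

gain = 10e9      # $ delivered to "the cause" if the fraud succeeds
loss = 100e9     # $ of future funding and goodwill destroyed if it is exposed

# Naive EV = p * gain - (1 - p) * loss. Setting it to zero and solving for p:
p_break_even = loss / (gain + loss)

print(f"Fraud looks 'worth it' whenever P(success) > {p_break_even:.1%}")   # ~90.9% here
```

Everything turns on what you plug in for the downside, and I still haven't seen your numbers.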
But I'll throw you a bone. For the sake of argument, let's suppose that you crunch the numbers so that the math conveniently works out. Nice.
But is this math consistent with Nick Bostrom's math, or Will's math? Is it consistent with the view that 100 dollars donated to AI safety is worth one trillion human lives? Or that every second of delayed technological development is just as bad as the untimely death of 10^29 people?
On the face of it, it seems extremely improbable that this math could be consistent. Because what if Sam was very likely to get away with it, but just got unlucky? Alternatively, what if the risks are higher but SBF had the potential to become the world's first trillionaire? Would that change things?
If it does, then this math seems flimsy. So, if we want to avoid that flimsiness, we need to say that Will's or Nick's math is wrong.
But here's the rub: SBF could have calibrated his own decision-theoretic musings to the tune of Nick and Will's, no? And if he did, that would suggest that Nick and/or Will's math is dangerous, would it not? And if their math is dangerous, that means that there must be something wrong with EA's messaging. So perhaps it's the case that EA (and EV thinking in general) does, in fact, bear some responsibility for this mess.
This brings us to your edit:
the EA message is not the problem.
Care to elaborate on this point?
How do you know this? Are you sure that the EA message is not the problem? What does it mean to say that a message is a "problem", in this case? Would the EA message be a problem if it were true that, had EA never existed, SBF would never have committed massive financial fraud?
Because this counterfactual claim seems very likely to be correct. (See this Bloomberg article here.) So this would seem to suggest that EA is part of the problem, no?
Because, surely, if EA is causally responsible for this whole debacle, then "the EA message" is at least part of the problem. Or do you disagree?
If you do disagree, then: What does it mean, in your view, for something to be a "problem"? And what exactly would it take for "the EA message" to be "the problem"?
And last, but certainly not least: Is there anything at all that could convince you that EV reasoning is not infallible?
"What if...? Have you run the calculations? … On the face of it..."
Did you even read the OP? Your comments amount to nothing more than "But my naive utilitarian calculations suggest that these bad acts could really easily be justified after all!" Which is simply non-responsive to the arguments against naive utilitarianism.
I'm not going to repeat the whole OP in response to your comments. You repeatedly affirm that you think naive calculations, unconstrained by our most basic social knowledge about reliable vs counterproductive means of achieving social goals, are suited to answering these questions. But that's precisely the mistake that the OP is arguing against.
Is there anything at all that could convince you that EV reasoning is not infallible?
This is backwards. You are the one repeatedly invoking naive "EV reasoning" (i.e. calculations) as supposedly the true measure of expected value. I'm arguing that true expected value is best approximated when constrained by reliable heuristics.
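To illustrate the structure of that claim, here is a toy contrast with numbers I am making up purely for illustration; the exact values are not the point.

```python
# Toy contrast between a "naive" expectation and one constrained by a
# reliability heuristic. Every figure is invented; the point is structural.

p_claimed = 0.9         # the would-be fraudster's own confidence of getting away with it
reliability = 0.2       # heuristic discount: such confidence is usually badly overconfident
p_constrained = p_claimed * reliability

gain = 10e9             # placeholder: $ delivered to the cause if it works
loss = 50e9             # placeholder: $ of funding, trust, and talent destroyed if it fails

ev_naive = p_claimed * gain - (1 - p_claimed) * loss
ev_constrained = p_constrained * gain - (1 - p_constrained) * loss

print(f"naive:       ${ev_naive / 1e9:+.1f}B")        # +4.0B -- looks great on paper
print(f"constrained: ${ev_constrained / 1e9:+.1f}B")  # -39.2B -- the heuristic earns its keep
```

The naive calculation takes the agent's self-assessed odds at face value; the constrained one treats "I'm sure I'll get away with it" as exactly the kind of belief our track record tells us to discount heavily. That is all the heuristic is doing.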
If you do disagree, then: What does it mean, in your view, for something to be a "problem"?
I mean for it to be false, unjustified, and something we should vociferously warn people against. Not every causal contribution to a bad outcome is a "problem" in this sense. Oxygen also causally contributed to every bad action by a human: without oxygen, the bad act would not have been committed. Even so, oxygen is not the problem.
You repeatedly affirm that you think naive calculations, unconstrained by our most basic social knowledge about reliable vs counterproductive means of achieving social goals, are suited to answering these questions
Where did I say this?
I'm not going to repeat the whole OP in response to your comments.
You're assuming that you responded to my question in the original post. But you didn't. Your post just says "trust me guys, the math checks out". But I see no math. So where did you get this from?
I'm arguing that true expected value is best approximated when constrained by reliable heuristics.
"Arguing"? Or asserting?
If these are arguments, they are not very strong. No one outside of EA is convinced by this post. I'm not sure if you saw, but this post has even become the subject of ridicule on Twitter.
Not every causal contribution to a bad outcome is a "problem" in this sense. Oxygen also causally contributed to every bad action by a human: without oxygen, the bad act would not have been committed.
Okay, I didn't realize we were going back to PHIL 101 here. If you need me to spell this out explicitly: SBF chose his career path because he was encouraged by prominent EA leaders to earn to give. Without EA, he would never have had the means to start FTX. The earn-to-give model encourages shady business practices.
The connection is obvious.
Saying this has nothing to do with EA is like saying that Stalin's governance had nothing to do with Marxism.
Denying the link is delusional and makes us look like a cult.