AndreaSR - Effective Altruism forum viewer
https://ea.greaterwrong.com/
Comment by AndreaSR on Help me understand this expected value calculation
https://ea.greaterwrong.com/posts/GcHekCvvRhEovyPjb/help-me-understand-this-expected-value-calculation#comment-duuJyg4Cy3ABoPxpS
<p>Yeah, I’ve had the same thought. But as far as I can tell, it still doesn’t add up, so I figured there must be something else going on. Thanks for your reply, though.</p>
AndreaSR · duuJyg4Cy3ABoPxpS · Fri, 15 Oct 2021 17:11:56 +0000

Comment by AndreaSR on Help me understand this expected value calculation
https://ea.greaterwrong.com/posts/GcHekCvvRhEovyPjb/help-me-understand-this-expected-value-calculation#comment-eSLhgbGoEnt4C9FFY
<p>Thanks for your reply. I’m glad my calculation doesn’t seem way off. I still feel it’s too obvious a mistake not to have been caught, though, if it were indeed a mistake...</p>
AndreaSR · eSLhgbGoEnt4C9FFY · Fri, 15 Oct 2021 17:11:02 +0000

Help me understand this expected value calculation by AndreaSR
https://ea.greaterwrong.com/posts/GcHekCvvRhEovyPjb/help-me-understand-this-expected-value-calculation
<p>Hi there!</p><p>I’m looking at one of Bostrom’s papers (Existential Risk Prevention as Global Priority, p. 19). He includes this expected value calculation, which I just can’t make sense of:</p><p>“Even if we give this allegedly lower bound on the cumulative output potential of a technologically mature civilisation [he’s referring to his estimate of 10^52 future lives here] a mere 1 per cent chance of being correct, we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”</p><p>When I try to repeat his calculation, I reason as follows: reducing the risk of losing 10^50 expected lives (10^52 lives at a 1% credence) by 10^-20 (one billionth of one billionth of one percentage point) is the same as increasing the probability of getting those 10^50 lives by 10^-20. So the expected value should be 10^50 * 10^-20 = 10^30 lives. However, he writes that it is equal to 10^20 lives (a hundred billion times a billion). It’s a fairly trivial calculation, so I assume there’s something obvious I’ve overlooked. Can you help me see what I’m missing?</p>
AndreaSR · GcHekCvvRhEovyPjb · Thu, 14 Oct 2021 06:23:33 +0000

Comment by AndreaSR on Pascal's Mugging and abandoning credences
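For anyone checking the numbers, here is the arithmetic from the post above as a small Python sketch. The variable names are my own, not Bostrom's; the mismatch between the two results is exactly what the question is asking about, so the sketch only reproduces it, it does not resolve it:

```python
# Reproducing the arithmetic from the question (illustrative sketch only).
potential_lives = 10**52          # Bostrom's lower-bound estimate of future lives
credence = 0.01                   # "a mere 1 per cent chance of being correct"
expected_lives = potential_lives * credence           # 10^50 expected lives

# "one billionth of one billionth of one percentage point"
risk_reduction = 1e-9 * 1e-9 * 0.01                   # 10^-20

# The poster's calculation of the expected value of that risk reduction:
ev_of_reduction = expected_lives * risk_reduction     # 10^50 * 10^-20 = 10^30
print(f"{ev_of_reduction:.0e}")                       # 1e+30

# Bostrom's stated figure: "a hundred billion times as much as a billion lives"
stated_figure = 10**11 * 10**9                        # 10^20
print(f"{stated_figure:.0e}")                         # 1e+20
```

As the question notes, the first number comes out ten orders of magnitude larger than the second; nothing in the sketch explains the gap.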
https://ea.greaterwrong.com/posts/kHeRZPGzRvgJG7Mix/pascal-s-mugging-and-abandoning-credences#comment-Yz6ArDThwwrrYtxit
<p>Thanks for your answer. I don’t think I understand what you’re saying, though. As I understand it, it makes a huge difference to the resource distribution that longtermism recommends: if you allow e.g. Bostrom’s 10^52 happy lives as the baseline utility, avoiding x-risk becomes vastly more important than if you only consider the 10^10 people alive today. Right?</p>
AndreaSR · Yz6ArDThwwrrYtxit · Fri, 09 Jul 2021 17:01:28 +0000

Comment by AndreaSR on Pascal's Mugging and abandoning credences
https://ea.greaterwrong.com/posts/kHeRZPGzRvgJG7Mix/pascal-s-mugging-and-abandoning-credences#comment-tTfe4zw2sFMaijwKe
<p>Thanks for your reply. A follow-up question: when I see the ‘cancelling out’ argument, I always wonder why it doesn’t apply to the x-risk case itself. It seems to me that you could just as easily argue that halting biotech research in order to enter the Long Reflection might backfire in some unpredictable way, or that aiming at Bostrom’s utopia would ruin our chances of ending up in a vastly better state that we had never even dreamt of—and so on and so forth.</p><p>Isn’t the whole case for longtermism so empirically uncertain as to be open to the ‘cancelling out’ argument as well?</p><p>I hope what I’m saying makes sense.</p>
AndreaSR · tTfe4zw2sFMaijwKe · Fri, 09 Jul 2021 16:53:48 +0000

Pascal’s Mugging and abandoning credences by AndreaSR
https://ea.greaterwrong.com/posts/kHeRZPGzRvgJG7Mix/pascal-s-mugging-and-abandoning-credences
<p>What are the theoretical obstacles to abandoning expected utility calculations about extreme scenarios, like x-risk from a rogue AI system, in order to avoid biting the bullet on Pascal’s Mugging? Does Bayesian epistemology really require that we assign a credence to every proposition, and, if so, shouldn’t we reject this framework in order to avoid fanaticism? It does not seem rational to me to assign credences to e.g. the success of specific x-risk mitigation interventions when there are so many unknown unknowns governing the eventual outcome.</p><p>I hope you can help me sort out this confusion.</p>
AndreaSR · kHeRZPGzRvgJG7Mix · Fri, 09 Jul 2021 10:18:00 +0000