As I see it, I responded entirely reasonably to the actual text of what you wrote. (Maybe what you wrote gave a misleading impression of what you meant or intended; again, I made no claims about the latter.)
Is there a way to mute comment threads? Pursuing this disagreement further seems unlikely to do anyone any good. For what it's worth, I wish you well, and I'm sorry that I wasn't able to provide you with the agreement that you're after.
It's a bad habit to make up a ridiculous assertion that isn't in the text you're responding to, and then respond as if that assertion is in the text. I've been gravely concerned about low-probability, high-impact events since before the term "effective altruism" existed. I've discussed this publicly for at least a decade. I've discussed it many times on this forum. I don't need you to tell me to worry about low-probability, high-impact events, and pretending that my post above argues we should disregard them is uncharitable and discourteous. I don't need you to explain the concept of expected value. That's absurd.
I started writing about existential risk from AI in 2015, based on Nick Bostrom's Superintelligence book and other sources. Bostrom, of course, helped popularize the argument that even very low probabilities of existential catastrophe have very high expected value. As mentioned in the post, I've been writing about AGI for a long time, and I'm clearly familiar with the basic arguments. It's frustrating that your response was: well, wait, have you ever considered that worrying about low-probability events might be important? Yeah, of course I have. How could anyone familiar with this topic not have considered that?
The majority of people involved in EA currently seem to believe that there's a 50%+ chance of AGI by 2035 (with many setting their median year at or before 2032), and if that probability is something like 3-8 orders of magnitude too high because people in EA accept bad arguments and evidence without a reasonable minimum of scientific skepticism or critical appraisal, then that's worthy of discussion. That's the main thing this post is about. Coming to wrong conclusions typically has negative consequences, as does having bad intellectual practices. Those negative consequences may include a failure to guard against existential risk wisely and well (e.g., the wrong kind of AI safety work could be getting funded, rather than AI safety work getting funded too much overall). There may be other important negative consequences as well, such as reputational harm to EA, alienation of longstanding community members (like me), misallocation of funding away from places where it's direly needed (such as non-existentially dangerous pandemics, global poverty, or animal welfare), or serious physical or psychological harm to individuals due to a few people holding apocalyptic or millennialist views, to name a few. Why not respond to that argument about the EA community's intellectual practices, which is what the post emphasizes again and again and again, rather than to something that isn't in the text?
This misunderstanding was a pretty avoidable mistake on your part (which is okay; mistakes are forgivable), and it would have been easy to simply acknowledge it after I pointed it out. Why can't you extend that basic courtesy?
I don't think you should complain about David Thorstad or anyone else allegedly misrepresenting your arguments when you're not upholding the principle here that you asked Thorstad to respect. You could have engaged with the disagreement more generously, and maybe we could have had a constructive discussion. I was polite and patient the first time I pointed out that you got my view wrong, more confrontational the second time, and now I'm pointing it out again in a state of complete exasperation. You had multiple chances to come to the table. What gives?