@Richard Y Chappell, would you please do me the courtesy of acknowledging that you misunderstood my argument? I think this was a rather uncharitable reading on your part, and one that would have been fairly easy to avoid. Your misreading was not explicitly forestalled by the text, but it was not supported by the text either, and there was much in the text to suggest I did not hold the view that you took to be the thesis or argument. I found your misreading discourteous for that reason.
Much of the post is focused on bad intellectual practices, such as:
Not admitting you got a prediction wrong after you got it wrong
Repeating the same prediction multiple times in a row and repeatedly getting it wrong, and seemingly not learning anything
Making fake graphs with false data, no data, dubious units of measurement, no units of measurement at all, and other problems or inaccuracies
Psychological or social-psychological biases, such as millennialist cognitive bias, bias resulting from the intellectual and social insularity of the EA community, and possible confirmation bias (e.g., why hasn't Toby Ord's RL scaling post gotten much more attention?)
Acceptance or tolerance of arguments and assertions that are really weak, unsupported, or sometimes just bad
I don't interpret your comment as a defense or endorsement of any of these practices (although I could if I wanted to be combative and discourteous). I'm assuming you don't endorse these practices and your comment was not intended as a defense of them.
So, why reply to a post that is largely focused on those things as if its thesis or main argument were something else entirely, something that was never said in the text?
On the somewhat narrower point of AI capabilities optimism, I think the AI bubble popping within the next 5 years or so would be strong evidence that the EA community's AI capabilities optimism has been misplaced. If the large majority of people in the EA community only thought there was a 0.1% chance or a 1% chance of AGI within a decade, then the AI bubble popping might not be that surprising from their point of view. But the actual majority view seems to be more like a 50%+ chance of AGI within a decade. My impression from discussions with various people in the EA community is that many of them would find it surprising if the AI bubble popped.
The difference between a 50%+ chance of AGI within a decade and a 0.1% chance is enormous from an epistemic perspective, even if, just for the sake of argument, it makes absolutely no difference for precautionary arguments about AI safety. So, I think misestimating the probability by that much would be worthy of discussion, even if, again for the sake of argument, it doesn't change the underlying case for AI safety.
It is especially worthy of discussion if the misestimation is influenced by bad intellectual practices, such as those listed above. All the information needed to diagnose those intellectual practices as bad is available today, so the AI bubble popping isn't necessary to see the problem. However, people in the EA community may be reluctant to take a hard look at these practices without some big external event, like an AI bubble popping, shaking them up. As I said in the post, I'm pessimistic that even after the AI bubble pops, people in the EA community will be willing to examine these intellectual practices and acknowledge that they're bad. But it's worth a shot for me to say something about it anyway.
There are many practical reasons to worry about bad intellectual practices. For example, people in AI safety should worry about whether they're reducing or increasing existential risk from AGI, and bad intellectual practices at a systemic or widespread level make it more likely they'll screw this up. Or, given that, according to Denkenberger in another comment on this post, funding for existential risk from AGI has significantly drawn funding away from other existential risks, overestimating existential risk from AGI on the basis of bad intellectual practices might (counterfactually) increase total existential risk just by causing funding to be allocated less wisely. And, of course, there are many other reasons to worry about bad intellectual practices, especially if they are prevalent in a community and culturally supported by that community.
We could both go on and on listing reasons why thinking badly might lead to doing badly. Just one more example I'll bring up: in practice, most AI safety work seems to make rather definite, specific assumptions about the underlying technical nature of AGI. If AI safety has, by and large, bet on an implausible AI paradigm as the basis for AGI, out of at least several far more plausible and widely known candidates (largely as a result of the bad intellectual practices listed above), then AI safety will be far less effective at achieving its goals. There might still be a strong precautionary argument for doing AI safety work even on that implausible paradigm, but given that AI safety has, in practice, for the most part, bet on one specific horse and not the others, picking the wrong paradigm is a problem. You could maybe argue for an allocation of resources weighted across different AI paradigms by their perceived plausibility, but that would still imply a large reallocation of resources if the paradigm AI safety is betting on is highly implausible and there are several other candidates that are much more plausible. So, I think this is a fair line of argument.
What matters is not just some unidimensional measure of the EA community's beliefs, like the median year of AGI, the probability of AGI within a certain timeframe, or the probability of global catastrophe from AGI (conditional on its creation, or within a certain timeframe). If bad intellectual practices push that number too high, that isn't necessarily fine on precautionary grounds; it can mean existential risk is increased.
Honestly, I still think my comment was a good one! I responded to what struck me as the most cruxy claim in your post, explaining why I found it puzzling and confused-seeming. I then offered what I regard as an important corrective to a bad style of thinking that your post might encourage, whatever your intentions. (I made no claims about your intentions.) You're free to view things differently, but I disagree that there is anything "discourteous" about any of this.
To quote you from another thread:
"Going back to the OP's claims about what is or isn't 'a good way to argue,' I think it's important to pay attention to the actual text of what someone wrote. That's what my blog post did, and it's annoying to be subject to criticism (and now downvoting) from people who aren't willing to extend the same basic courtesy to me."
You misunderstood my argument based on a misreading of the text. Simple as that.
Please extend the same courtesy to others that you request for yourself. Otherwise, it's just "rules for thee but not for me."
As I see it, I responded entirely reasonably to the actual text of what you wrote. (Maybe what you wrote gave a misleading impression of what you meant or intended; again, I made no claims about the latter.)
Is there a way to mute comment threads? Pursuing this disagreement further seems unlikely to do anyone any good. For what it's worth, I wish you well, and I'm sorry that I wasn't able to provide you with the agreement that you're after.
It's a bad habit to make up a ridiculous assertion that isn't in the text you're responding to, and then respond as if that assertion is in the text. I've been gravely concerned about low-probability, high-impact events since before the term "effective altruism" existed. I've discussed this publicly for at least a decade. I've discussed it many times on this forum. I don't need you to tell me to worry about low-probability, high-impact events, and to pretend that my post above argues we should disregard them is uncharitable and discourteous. I don't need you to explain the concept of expected value. That's absurd.
I started writing about existential risk from AI in 2015, based on Nick Bostrom's book Superintelligence and other sources. Bostrom, of course, helped popularize the argument that even very low probabilities of existential catastrophe can carry enormous weight in expected value terms. As mentioned in the post, I've been writing about AGI for a long time, and I'm clearly familiar with the basic arguments. It's frustrating that your response was: well, wait, have you ever considered that worrying about low-probability events might be important? Yeah, of course I have. How could anyone familiar with this topic not have considered that?
The majority of people involved in EA currently seem to believe that there's a 50%+ chance of AGI by 2035 (with many setting their median year at or before 2032), and if that probability is something like 3-8 orders of magnitude too high because people in EA accept bad arguments and evidence without a reasonable minimum of scientific skepticism or critical appraisal, then that's worthy of discussion. That's the main thing this post is about. Coming to wrong conclusions typically has negative consequences, as does having bad intellectual practices. Those negative consequences may include a failure to guard against existential risk wisely and well (e.g., it could mean the wrong kind of AI safety work is getting funded, rather than AI safety work getting funded too much overall). There may be other important negative consequences as well: reputational harm to EA, alienation of longstanding community members (like me), misallocation of funding away from places where it's direly needed (such as non-existentially-dangerous pandemics, global poverty, or animal welfare), or serious physical or psychological harm to individuals due to a few people holding apocalyptic or millennialist views, just to name a few. Why not respond to that argument about the EA community's intellectual practices, which is what the post emphasizes again and again and again, rather than to something that isn't in the text?
This misunderstanding was a pretty avoidable mistake on your part (which is okay; mistakes are forgivable), and it would have been easy to simply acknowledge your mistake after I pointed it out. Why can't you extend that basic courtesy?
I don't think you should complain about David Thorstad or anyone else allegedly misrepresenting your arguments when you're not upholding the principle here that you asked Thorstad to respect. You could have engaged with this disagreement more generously, and maybe we could have had a constructive discussion. I was polite and patient the first time I pointed out that you got my view wrong, more confrontational the second time, and now I'm pointing it out again in a state of complete exasperation. You had multiple chances to come to the table. What gives?