Writing is done for an audience. Effective altruists have a very particular practice of stating their personal credences in the hypotheses that they discuss. While this is not my practice, in writing for effective altruists I try to be as precise as I can about the relative plausibility that I assign to various hypotheses and the effect that this might have on their expected value.
When writing for academic audiences, I do not discuss uncertainty unless I have something to add which my audience will find to be novel and adequately supported.
I don’t remind academic readers that uncertainty matters, because all of them know that on many moral theories uncertainty matters and many (but not all) accept such theories. I don’t remind academic readers of how uncertainty matters on some popular approaches, such as expected value theory, because all of my readers know this and many (but fewer) accept such theories. The most likely result of invoking expected value theory would be to provoke protests that I am situating my argument within a framework which some of my readers do not accept, and that would be a distraction.
I don’t state my personal probability assignments to claims such as the time of perils hypothesis because I don’t take myself to have given adequate grounds for a probability assignment. Readers would rightly object that my subjective probability assignments had not been adequately supported by the arguments in the paper, and referees would force me to remove them, if the paper were not rejected out of hand.
For the same reason, I don’t use language forcing my personal probability assignments on readers. There are always more arguments to consider, and readers differ quite dramatically in their priors. Ending a paper with the conclusion that a claim like the time of perils hypothesis has a probability on the order of 10^(-100) or 10^(-200) would, again, rightly provoke the objection that this claim has not been adequately supported.
When I write, for example, that arguments for the time of perils hypothesis are inconclusive, my intention is to allow readers to make up their own minds as to precisely how poorly those arguments fare and what the resulting probability assignments should be. Academic readers very much dislike being told what to think, and they don’t care a whit for what I think.
As a data point, almost all of my readers are substantially less confident in many of the claims that I criticize than I am. The most common reason why my papers criticizing effective altruism are rejected from leading journals is that referees or editors take the positions criticized to be so poor that they do not warrant comment. (For example, my paper on power-seeking theorems was rejected from BJPS by an editor who wrote, “The arguments critically evaluated in the paper are just all over the place, verging from silly napkin-math, to speculative metaphysics, to formal explorations of reinforcement learning agents. A small minority of that could be considered philosophy of computer science, but the rest of it, in my view, is computer scientists verging into bad philosophy of mind and futurism … The targets of this criticism definitely want to pretend they’re doing science; I worry that publishing a critical takedown of these arguments could lend legitimacy to that appearance.”)
Against this background, there is not much pressure to remind readers that the positions in question could be highly improbable. Most think this already, and the only thing I am likely to do is to provoke quick rejections like the above, or to annoy the inevitable referee (an outlier among my readers) selected for their sympathies with the position being criticized.
To tell the truth, I often try to be even more noncommittal in the language of my papers than the published version would suggest. For example, the submitted draft of “Mistakes in the moral mathematics of existential risk” said in the introduction that “under many assumptions, once these mistakes are corrected, the value of existential risk mitigation will be far from astronomical.” A referee complained that this was not strong enough, because (on their view) the only assumptions worth considering were those on which the value of existential risk mitigation is rendered extremely minimal. So I changed the wording to “Under many assumptions, once these mistakes are corrected, short-termist interventions will be more valuable than long-termist interventions, even within models proposed by leading effective altruists.” Why did I discuss these assumptions, instead of a broader class of assumptions under which the value of existential risk mitigation is merely non-astronomical? Because that’s what my audience wanted to talk about.
In general, I would encourage you to focus in your writing on the substantive descriptive and normative issues that divide you from your opponents. Anyone worth talking to understands how uncertainty works. The most interesting divisions are not elementary mistakes about multiplication, but substantive questions about probabilities, utilities, decision theories, and the like. You will make more significant contributions to the state of the discussion if you focus on identifying the most important claims that in fact divide you from your opponents and on giving extended arguments for those claims.
To invent and claim to resolve disagreements based on elementary fallacies is likely to have the effect of pushing away the few philosophers still genuinely willing to have substantive normative and descriptive conversations with effective altruists. We are not enthusiastic about trivialities.
To be fair to Richard, there is a difference between (a) stating your own personal probability in the time of perils hypothesis and (b) making clear that, for long-termist arguments to fail solely because they rely on time of perils, the hypothesis must have extremely low probability, not just low probability (at least if you accept expected value theory and grant that subjective probability estimates can legitimately be applied here at all, as you seemed to be doing for the sake of making an internal critique). I took it to be the latter that Richard was complaining your paper doesn’t do.
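To make (b) concrete, here is a toy expected value calculation; every figure in it is invented purely for illustration and is nobody’s actual estimate:

```python
# Toy illustration of why, under expected value theory, the time of
# perils hypothesis must be judged negligibly probable (not just
# unlikely) for astronomical-stakes arguments to fail. All numbers
# are hypothetical.

ASTRONOMICAL_VALUE = 1e30  # hypothetical value of mitigation if the
                           # time of perils hypothesis is true
BASELINE_VALUE = 1e10      # hypothetical value if it is false

def expected_value(p_perils):
    """Expected value of existential risk mitigation, coarsely split
    on whether the time of perils hypothesis holds."""
    return p_perils * ASTRONOMICAL_VALUE + (1 - p_perils) * BASELINE_VALUE

for p in [0.5, 0.01, 1e-6, 1e-20, 1e-100]:
    print(f"P(time of perils) = {p:8.0e}  ->  EV = {expected_value(p):.2e}")

# Even at p = 1e-6 ("definitely quite unlikely"), the astronomical term
# dominates; only probabilities on the scale of 1/ASTRONOMICAL_VALUE or
# below (e.g. the 10^-100 figures mentioned above) pull the expected
# value back down to baseline.
```

So merely judging the hypothesis unlikely leaves the long-termist argument intact; that is why the negligibility question, not the bare plausibility question, does the work.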
How strong do you think your evidence is that most readers of philosophy papers find the claim “X-risk is currently high, but will go permanently very low” extremely implausible? If you asked me to guess, I’d say most people’s reaction would be more like “I’ve no idea how plausible this is, other than definitely quite unlikely”, which is very different. But I have no experience with reviewers here.
I am a bit (though not necessarily entirely) skeptical of the “everyone really knows EA work outside development and animal welfare is trash” vibe of your post. I don’t doubt a lot of people in professional philosophy do think that. But at the same time, Nick Bostrom is more highly cited than virtually any reviewer you will have encountered. Long-termist moral philosophy turns up in leading journals constantly. One of the people you critiqued in your very good paper attacking arguments for the singularity is Dave Chalmers, and you literally don’t get more professionally distinguished in analytic philosophy than Dave. When I checked, your work criticizing long-termism seems to have made it into top journals too, which indicates there certainly are people who think it is not too silly to be worth refuting: https://www.dthorstad.com/papers
Hi David, I’m afraid you might have gotten caught up in a tangent here! The main point of my comment was that your post criticizes me on the basis of a misrepresentation. You claim that my “primary argumentative move is to assign nontrivial probabilities without substantial new evidence,” but actually that’s false. That’s just not what my blog post was about.
In retrospect, I think my attempt to briefly summarize what my post was about was too breezy, and misled many into thinking that its point was trivial. But it really isn’t. (In fact, I’d say that my core point there about taking higher-order uncertainty into account is far more substantial and widely neglected than the “naming game” fallacy that you discuss in the present post!) I mention in another comment how it applied to Schwitzgebel’s “negligibility argument” against longtermism, for example, where he very explicitly relies on a single constant probability model in order to make his case. Failing to adequately take model uncertainty into account is a subtle and easily-overlooked mistake!
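To make the structure of that mistake concrete, here is a toy sketch of how a mixture over models can swamp a conclusion drawn from one model alone; the per-century risks, horizon, and model weights are all invented for illustration and are not Schwitzgebel’s figures:

```python
# Hypothetical models of per-century extinction risk.
# Model A (constant risk): 20% extinction risk every century, forever.
# Model B (time of perils): 20% risk for five centuries, then 0.01%.

def survival(risks):
    """Probability of surviving every period in `risks`."""
    p = 1.0
    for r in risks:
        p *= 1 - r
    return p

CENTURIES = 100
model_a = survival([0.20] * CENTURIES)
model_b = survival([0.20] * 5 + [0.0001] * (CENTURIES - 5))

print(f"P(survive | constant risk)   = {model_a:.2e}")  # ~2e-10
print(f"P(survive | time of perils)  = {model_b:.2e}")  # ~0.32

# Give Model B even a modest 5% credence:
mixed = 0.95 * model_a + 0.05 * model_b
print(f"P(survive | 95/5 mixture)    = {mixed:.2e}")    # ~0.016

# The mixture is dominated by the small credence in Model B, so any
# negligibility verdict reached from Model A alone does not survive
# taking model uncertainty into account.
```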
A lot of your comment here seems to misunderstand my criticism of your earlier paper. I’m not objecting that you failed to share your personal probabilities. I’m objecting that your paper gives the impression that longtermism is undermined so long as the time of perils hypothesis is judged to be likely false. But actually the key question is whether its probability is negligible. Your paper fails to make clear what the key question to assess is, and the point of my ‘Rule High Stakes In’ post is to explain why it’s really the question of negligibility that matters.
To keep discussions clean and clear, I’d prefer to continue discussion of my other post over on that post rather than here. Again, my objection to this post is simply that it misrepresented me.
Thanks Richard!