This is true, but seems to be responding to tone rather than the substance of the argument. And given that (I think) we’re interested in the substantive question rather than the social legitimacy of the criticism, I think that it is more useful to engage with the strongest version of the argument.
The actual issue that is relevant here, which isn’t well identified, is that naive expected value reasoning fails in a number of ways. Some of these failures are legitimate criticisms, albeit not well formulated in the paper. Specifically, I think there are valuable points about secondary uncertainty, value of information, and similar issues that are ignored by Greaves and MacAskill in their sketch of the ideal decision-theoretic reasoning.
seems to be responding to tone rather than the substance of the argument.
That’s roughly true of my saying “In any case, ‘utterly oblivious’ seems to me to be both a rude phrasing and a strong claim.”
But I don’t think it’s true for my comment as a whole. Masrani makes specific claims here, and the claims are inaccurate.
And given that (I think) we’re interested in the substantive question rather than the social legitimacy of the criticism, I think that it is more useful to engage with the strongest version of the argument.
I think steelmanning is often really useful. But I think there’s also value in noticing when a person/post/whatever is just actually incorrect about something, and in trying to understand what arguments they’re actually making. Some reasons:
Something like epistemic spot-checking / combatting something like Gell-Mann amnesia
Making it less likely that other people walk away remembering the incorrect claim as actually true
Prioritising which arguments/criticisms to bother engaging with
We obviously shouldn’t choose arguments at random from the entire pool of available arguments in the world, or even from the pool of available arguments on a given topic. It’s probably more efficient to engage with arguments that are already quite strong than to steelman weaker arguments we happen to have stumbled upon.
So here I’m actually not solely interested in the substantive questions raised by Masrani’s post, but also in countering misconceptions that I think the post may have generated, and giving indications of why I think people might find it more useful to engage with other criticisms of longtermism instead (e.g., the ones linked to in the body of my post itself).
One final thing worth noting is that this was a quickly produced post adapting notes I’d made anyway. I do think that if I’d spent quite a while on this, it’d be fair to say “Why didn’t you just talk about the best arguments against longtermism, and the points missing from Greaves & MacAskill, instead?”
I think that there are valuable points about secondary uncertainty, value of information, and similar issues that are ignored by Greaves and MacAskill in their sketch of the ideal decision-theoretic reasoning.
Yeah, I imagine there are many things in this vicinity that Greaves & MacAskill didn’t cover yet that are relevant to the case for strong longtermism or how to implement it in practice, and I’d be happy to see (a) recommendations of sources where those things are discussed well, and/or (b) other people generating new useful discussions of those things. Ideally these would be applied to longtermism specifically, but general discussions (or general discussions plus a quick explanation of the relevance) seem useful too.
I definitely don’t mean to imply with this post that I see strong longtermism as clearly true; I’m just quickly countering a specific set of misconceptions and objections.
As I mentioned in my other reply, I don’t see as much value in responding to weak-man claims here on the forum, but agree that they can be useful more generally.
Regarding “secondary uncertainty, value of information, and similar issues,” I’d be happy to point to sources that are relevant to these topics generally, especially Morgan and Henrion’s “Uncertainty,” which is a general introduction to some of these ideas, and my RAND dissertation chair’s work on policymaking under uncertainty, which focuses on US DOD decisions but is applicable more widely. Unfortunately, I haven’t put together my ideas on this, and I don’t know that anyone at GPI has done so either. But I do know that they have engaged with several people at RAND who do this type of work, so it’s on their agenda.