(Having just read the forum summary so far) I think there’s a bunch of good exploration of arguments here, but I’m a bit uncomfortable with the framing. You talk about “if Maxipok is false”, but this seems to me like a type error. Maxipok, as I understand it, is a heuristic: it’s never going to give the right answer 100% of the time, and the right lens for evaluating it is how often it gives good answers, especially compared to other heuristics the relevant actors might reasonably have adopted.
Quoting from the Bostrom article you link:
At best, maxipok is a rule of thumb or a prima facie suggestion.
It seems to me like when you talk about maxipok being false, you are really positing something like:
Strong maxipok: The domain of applicability of maxipok is broad, so that pretty much all impartial consequentialist actors should adopt it as a guiding principle
Whereas maxipok is a heuristic (which can’t have truth values), strong maxipok (as I’m defining it here) is a normative claim, and can have truth values. I take it that this is what you are mostly arguing against—but I’d be interested in your takes; maybe it’s something subtly different.
I do think this isn’t a totally unreasonable move on your part. Bostrom writes in some ways in support of strong maxipok, and others have sometimes invoked it as though in its strong form. But I care about our collectively being able to have conversations about the heuristic, which I think may have a good amount of value even if strong maxipok is false, and I worry that by conflating the two you make it harder for people to hold or talk about those distinctions.
(FWIW I’ve also previously argued against strong maxipok, even while roughly accepting Dichotomy, on the basis that other heuristics may be more effective.)
On Dichotomy:
Because you’ve picked a particularly strong form of Maxipok to argue against, you’re pushed into choosing a particularly strong form of Dichotomy that would be necessary to support it
But I think that this strong form of Dichotomy is relatively implausible to start with
And I would guess that Bostrom at the time of writing the article would not have supported it; certainly the passages you quote feel to me like they’re supporting something weaker
Here’s a weaker form of Dichotomy that I feel much more intuitive sympathy for:
Most things that could be “locked in” such that they have predictable long-term effects on the total value of our future civilization, and move us away from the best outcomes, actually constrain us to worlds which are <10% as good as the worlds without any such lock-in (and would therefore count as existential catastrophes in their own right)
The word “most” is doing work there, and I definitely don’t think it’s absolute (e.g. as you point out, the idea of dividing the universe up 50/50 between a civilization that will do good things with it and one that won’t); but it could plausibly still be enough to guide a lot of our actions
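To make the threshold explicit (this is my notation, not anything from the post or from Bostrom): writing V_best for the value of the best attainable futures and V_lock-in for the value of a future constrained by a given lock-in, the weaker Dichotomy says that for most lock-ins

$$ V_{\text{lock-in}} < 0.1 \cdot V_{\text{best}}, $$

which is exactly the condition for counting as an existential catastrophe in its own right. The 50/50 split case is one of the exceptions that “most” allows for: assuming value scales roughly linearly with resources, it gives something like $V \approx 0.5 \cdot V_{\text{best}}$, well above the threshold.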
Looking at the full article:
OK I think I much more strongly object to the frame in this forum post than in the research article—in particular, the research article is clear that it’s substituting in a precisification you call Maxipok for the original principle
But I’m not sure what to make of this substitution! Even when I would have described myself as generally bought into Maxipok, I’m not sure if I would have been willing to sign up to this “precisification”, which it seems to me is much stronger
In particular, your version is a claim about the existence of actions which are (close to) the best in various ways; whereas in order to discard Maxipok I would have wanted not just an existence proof, but practical guidelines for finding better things
You do provide some suggestions for finding better things (which is great), but you don’t directly argue that trying to pursue those would be better in expectation than trying to follow Maxipok (or argue about the cases in which it would be better)
This makes me feel that there’s a bit of a motte-and-bailey: you’ve set up a particularly strong precisification of Maxipok (which it’s not clear to me that e.g. Bostrom would have believed at the time of writing the paper you are critiquing); then you argue somewhat compellingly against it; then you conclude that it would be better if people did {a thing you like but haven’t really argued for} instead