If you grant that MWI is coherent, then I think you should be open to the possibility that it isn’t unique: there are other hypotheses that suggest possible projects far more likely to create massive amounts of value than to prevent it.
Actually, I didn’t address your argument from MWI because I suspect we couldn’t make any difference. Maybe I’m wrong (it’s way beyond my expertise), but quantum branching events would be happening all the time, so either (i) there are (or will be) infinitely many worlds whatever we do, in which case the problem here is more about infinite ethics than Fanaticism, or (ii) there is a limit to the number of possible branches, which I guess will (most likely) be reached whatever we do. Either way, it’s not clear to me that we would gain additional utility by creating more branching events.
[And yet, the modus ponens of one philosopher is the modus tollens of another… rat/EAs have actually been discussing the potential implications of weird physics: here, here...]
However, I’m not sure the problem I identified with PW (i.e., taking opportunity costs seriously) wouldn’t apply here, too… if we are to act conditional on MWI being true, then we should do the same for every theory that could be true with similar odds. But how strong should this “could” be? Like “we could be living in a simulation”? And how long until you face a “basilisk”, or just someone using motivated reasoning?
As Carl Shulman points out, this might be a case of “applying the possibility of large consequences to some acts where you highlight them and not to others, such that you wind up neglecting more likely paths to large consequence.”