(See here for a draft I whipped up for this, and feel free to comment!) An Earth-originating artificial superintelligence (ASI) may reason that the galaxy is busy in expectation, and that it could therefore eventually encounter an alien-originating ASI. ASIs from different homeworlds may find it valuable on first contact to verify whether they can each reliably enter into and uphold agreements, by presenting credible evidence of their own pro-social behaviour with other intelligences. If at least one of these ASIs has never met another, the only such agreement it could plausibly have entered into is with its progenitor species – maybe that’s us.
(See here for a draft I whipped up for this, and feel free to comment!) Hayden Wilkinson’s “In defence of fanaticism” argues that, in decision theory, you should always prefer a lower-probability chance of a sufficiently higher-value reward over a higher-probability chance of a lower-value one, or face serious problems. I think accepting his argument introduces new problems that aren’t described in the paper:
It is implied that each round of Dyson’s Wager (i.e. each person in the population being presented with the wager) has no effect on the probability distribution for subsequent rounds, which is unrealistic. I illustrate this with a “small worlds” example (a toy sketch follows after this list).
Fanaticism is only considered under positive theories of value and therefore ignores the offsetting principle, which assumes both the existence of and an exchange rate (or commensurability) between independent goods and bads. I’d like to address this in a future draft with multiple reframings of Dyson’s Wager under minimalist theories of value.
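To make the first problem concrete, here is a minimal Python sketch of how dependence between rounds changes the picture. The specific numbers, and the mechanism of a finite pool of “winnable worlds” that wins deplete, are my own illustrative assumptions rather than anything from the paper or the draft; the point is only that once rounds interact, total expected value stops scaling linearly with the number of wagers.

```python
import random

# Illustrative numbers only; none of these come from Wilkinson's paper.
P = 1e-4            # per-round probability that the speculative bet pays off
BIG_PAYOFF = 1e10   # value if it does
ROUNDS = 100_000    # one wager per person in the population
K = 3               # "small worlds": only K distinct ways the bet can ever pay off

def simulate_capped_total(rounds: int, k: int) -> float:
    """Total payoff when each win depletes a finite pool of winnable worlds."""
    wins = 0
    for _ in range(rounds):
        if wins < k and random.random() < P:
            wins += 1
    return wins * BIG_PAYOFF

# Independent-rounds assumption: expected value grows linearly, without bound.
ev_independent = ROUNDS * P * BIG_PAYOFF

# Small-worlds variant: total value can never exceed K * BIG_PAYOFF,
# so later rounds face a worse (eventually worthless) distribution.
total_capped = simulate_capped_total(ROUNDS, K)

print(f"independent-rounds EV:     {ev_independent:.3g}")
print(f"small-worlds sample total: {total_capped:.3g} (hard cap {K * BIG_PAYOFF:.3g})")
```

On these made-up numbers the independent-rounds calculation expects ten wins across the population, but only three can ever be realised, so treating each round’s distribution as fixed over-counts the value of repeatedly taking the fanatical bet.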
I’ll post my ideas as replies to this, so they can be voted on separately.