Ultra-near-termism doesn’t mean stopping AI. The ultra-near-termist would really, really like a FAI FOOM that simulates a billion subjective years of utopia in the next few days.
Of course, they would prefer a 0.1% chance of this FAI FOOM now to a 100% chance of FAI FOOM in two months.
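To see how steep the implied discounting is, here is a minimal worked sketch (the symbols U, δ and γ are mine: the FOOM’s utility, the discount factor over the two-month delay, and the per-day discount factor):

```latex
\[
  0.001 \cdot U \;>\; \delta \cdot U
  \quad\Longrightarrow\quad
  \delta < 0.001 .
\]
% With exponential discounting over the ~60-day delay, \delta = \gamma^{60},
% so \gamma < 0.001^{1/60} \approx 0.891 per day. Value roughly halves every
% six days, since (1/2)^{10} = 1/1024 \approx 0.001.
```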
If a FAI with nanotech could run simulations X times faster and Y times nicer than the status quo, you should encourage AI so long as you think it has at least a 1 in XY chance of being friendly (treating an unfriendly outcome as worth roughly zero).
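That threshold comes from a straightforward expected-utility comparison. A minimal sketch (the function and parameter names are my own; it normalizes the status quo utility rate to 1 and values an unfriendly outcome at 0):

```python
def should_encourage_ai(p_friendly: float, speedup_x: float, niceness_y: float) -> bool:
    """Ultra-near-termist decision rule sketched above.

    Status quo utility rate is normalized to 1. A friendly FOOM delivers
    utility at rate X * Y; an unfriendly one delivers ~0. So encouraging
    AI beats the status quo exactly when p * X * Y > 1, i.e. p > 1 / (X * Y).
    """
    return p_friendly * speedup_x * niceness_y > 1.0

# Example: simulations 1000x faster and 10x nicer justify a 2-in-10,000 shot.
assert should_encourage_ai(p_friendly=0.0002, speedup_x=1000, niceness_y=10)
```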
One thing that potentially breaks ultra-near-termism is the possibility of time travel. If you have a consistent discount rate over all time, this implies a goal of taking a block of hedonium back in time to the Big Bang, with pretty much any other goal having negligible utility in comparison. If you consider the past to have no value, then the goal would be to send a block of hedonium back in time to now.
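Under consistent exponential discounting this is forced by the math; a sketch (γ is the assumed per-unit-time discount factor, t measured from now):

```latex
% Utility u delivered at time t gets weight \gamma^{t} with 0 < \gamma < 1.
% For t < 0 (the past) the weight \gamma^{t} grows without bound as
% t \to -\infty, so delivery at the earliest reachable moment (the Big
% Bang) dwarfs anything achievable at t \geq 0:
\[
  W(t) = u \,\gamma^{t},
  \qquad
  \gamma^{t} \to \infty \text{ as } t \to -\infty .
\]
```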