How can we do good without taking x-risk into account? If all sentient life on Earth* is destroyed, goodness becomes impossible because there’s no one left to be good to.
*Some existential risks, like takeover by an unsafe, superintelligent AGI, may extend beyond Earth on a cosmic time scale due to sub-light space travel.
In my view, we ought to show humility about our ability to accurately forecast risk in any one domain beyond a five-year window. There’s little evidence to suggest anyone is better than random chance at making accurate forecasts beyond a certain time horizon.
At the core of SBF’s grift was the notion that stealing money from Canadian pensioners was justified if that money was spent reducing the odds of an x-event. After all, the end of humanity in the next century would eliminate trillions of potential lives, so some short-term suffering today is easily tolerated. Simple utilitarian logic would dictate that we should sacrifice well-being today if we can prove that those resources have a positive expected value.
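To make the arithmetic behind that logic explicit (the numbers below are invented purely for illustration): even a tiny assumed reduction in extinction probability, multiplied by trillions of potential future lives, yields an enormous expected value.

\[
\underbrace{10^{12}}_{\text{potential future lives}} \times \underbrace{10^{-6}}_{\text{assumed reduction in extinction risk}} = 10^{6} \ \text{expected lives saved}
\]

On that naive accounting, almost any present-day harm looks cheap by comparison, which is precisely the move at issue here.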
I think anyone making an extraordinary claim needs equally extraordinary evidence to back it. That doesn’t mean x-risk isn’t theoretically possible.
Let’s put it this way—if I said there was a chance that life on Earth could be wiped out by an asteroid, few would argue against it since we know the base rate of asteroids hitting Earth is non-zero. We could argue about the odds of getting hit by an asteroid in any given year, but we wouldn’t argue over the very notion. And we could similarly argue over the right level of funding to scan deep space for potential Earth-destroyers, though we wouldn’t argue over the merits of the enterprise in the first place.
That is very different from my claiming with confidence that there is a 25% chance humanity perishes from an asteroid in the next 20 years, and recommending on that basis that we stop all infrastructure projects globally and direct the resources to interstellar lifeboats. You’d rightly ask for concrete evidence in the face of such a claim.
The latter is what I hear from AGI alarmists.
I like humility. I wish AI advocates had more of it too. I agree that forecasting risk beyond five years is hard. It is the burden of advocates to demonstrate that what they want to do has acceptable risks of harm over the 10-to-100-year period, not skeptics’ burden to prove non-safety or non-beneficence.
Exactly, the burden of proof lies with those who make the claim.
I hope EA is able to get back to the basics of doing the most real-world good with limited resources rather than the utilitarian nonsense of saving trillions of theoretical future humans.
It’s not utilitarian nonsense to think about large numbers of loved ones. There are trillions of fish in the oceans, and we have the chance to make their lives so much better!
https://reducing-suffering.org/how-many-wild-animals-are-there/#Fish
Agreed! My comment was aimed at the absurd conclusions one reaches when weighing the tradeoffs we make today against trillions of unborn humans. That logic leads to extreme positions.