I agree with what you’ve said about how AI safety principles could give us a false sense of security. Risk compensation theory has shown that, across a number of technologies, we become less cautious when we believe more safety mechanisms are in place.
I also agree with what you’ve said about how it’s likely that we’ll continue to develop more and more technological capabilities, even if this is dangerous. That seems pretty reasonable given the complex economic system that funds these technologies.
That said, I don’t agree with the dystopian/doomsday connotation of some of your words: “Given that we’ve proven incapable of understanding this in the seventy years since Hiroshima, it’s not likely we will learn it” or “Human beings are not capable of successfully managing ever more power at an ever accelerating rate without limit forever. Here’s why. We are not gods.”
In particular, I don’t believe that communicating in such a tone is very useful, compared with more specific analysis, for example of the safety benefits versus the risk compensation costs of particular AI safety techniques.
Thanks much for your engagement, Madhav.
Respectfully, I believe that facts and evidence support a doomsday scenario. As an example, every human civilization ever created has eventually collapsed. Anyone proposing that this won’t also happen to our current civilization would bear a very heavy burden of proof.
The accelerating knowledge explosion we’re experiencing is built upon the assumption that human beings can manage any amount of knowledge and power delivered at any rate. This is an extremely ambitious claim, especially when we reflect on the thousands of massive hydrogen bombs we currently have aimed down our own throats.
To find optimism in this view we have to shift our focus to much longer time scales. What I see coming is similar to what happened to the Roman Empire. That empire collapsed from its own internal contradictions, a period of darkness followed, and then a new, more advanced civilization emerged from the ashes.
Apologies, I really have no complaint with your article, which seems very intelligently presented. But I will admit I believe your focus to be too narrow. That is debatable of course, and counter-challenges are sincerely welcomed.
Again, I agree with you regarding the reality that every civilisation has eventually collapsed. I also personally agree that it currently seems unlikely that our ‘modern globalised’ civilisation will escape collapse, though I’m no expert on the matter.
I have no particular insight about how comparable the collapse of the Roman Empire is to the coming decades of human existence.
I agree that amidst all the existential threats to humankind, the content of this article is quite narrow.
Apologies, it’s really not my intent to hijack your thread. I do hope that others will engage you on the subject you’ve outlined in your article. I agree I should probably write my own article about my own interests.
I can’t seem to help it. Every time I see an article about managing AI or genetic engineering and the like, I feel compelled to point out that trying to manage such emerging technologies one by one is a game of whack-a-mole that we are destined to lose. Not only is the scale of the powers involved in these technologies vast, but ever more of them, of ever greater power, will come online at an ever accelerating pace.
What I hope to see more of are articles that address how we might learn to manage the machinery creating all these multiplying threats: the ever accelerating knowledge explosion.
Ok, I’ll leave it there, and get to work writing my own articles, which I hope you might challenge in turn.