An average North Korean may well think that an AGI based on their values would be a great thing to take over the universe, but most of us would disagree. The view from inside a system is very different from the view from the outside. Orwell spoke of a jackboot stamping on the face of humanity forever. I feel like the EA community is doing its best to avoid that outcome, but I’m not sure the major world powers are. Entrenching the power of current world governments is unlikely, in my view, to lead to great outcomes. Perhaps the wild card is a valid choice. More than I want to be a paperclip, I want to live in a world where building a billion humanoid robots is not a legitimate business plan and where AGI development goes slowly, slowly. That doesn’t seem to be an option. So maybe no control of AGI is better than control by psychopaths?
Thanks for sharing this. I assume you were already struggling with suicidal ideation before becoming a counsellor? I would hate to think that counselling itself was a factor, but I could believe in such a pipeline. How was the quality of your counselling training? Do you think it prepared you well for the worst situations you found yourself in? I ask because my ex had a bad experience with counselling, though she would have been in a very challenging cohort. Do you have any opinions on using LLMs for therapeutic conversations on these matters?
For context, I’m thinking of a situation like the paradox of the plankton...
https://en.wikipedia.org/wiki/Paradox_of_the_plankton
“...in which a limited range of resources supports an unexpectedly wide range of plankton species, apparently flouting the competitive exclusion principle, which holds that when two species compete for the same resource, one will be driven to extinction.”
Could ASI political ecology be a similar situation, with humans and other biotic agents coexisting happily in a multi-agent ASI ecosystem?
Hi, I’m not sure if I failed to read your post before submitting my own or if it was just good timing.
I’m interested in what multi-agent dynamics mean for an ASI political ecology, and what the fact that ASI agents will need to learn negotiation, compromise and cooperation (as well as Machiavellian strategising) means for human flourishing and survival. I’d like to believe that multi-agent dynamics make humanity more likely to be incorporated into the future, but that might just be cope. Thanks for the link; I look forward to reading it.
I guess the crux of my snarky comment is that if your only choice for master of the universe is between two evil empires, you’re kinda screwed either way.
As an Australian, and therefore beholden to both China and the USA, the answer doesn’t seem so clear cut to me. China has what seems to be an aggressive green agenda and a focus on social cohesion and harmony that shades into oppression. They seem able to get massive engineering projects completed and don’t seem interested in getting involved in other countries’ politics via proxy wars. Apparently they’re alright with harvesting the organs of political prisoners.
America puts itself forward as the bastion of freedom but has massive inequality, large prison populations and can’t figure out universal healthcare. Americans are creative, confident and murder each other frequently. Their president is a Christian who loves to grab pussies and dreams of hereditary rule.
My personal preference is to take my chances with unaligned ASI, as the thought of either of these circuses being the ringmaster of all eternity is terrifying. I’d much rather be a paperclip than a communist or corporate serf.
That looks like a great resource, thanks.