I suppose I’m interested in questions around what counts as an existential threat. How bad would a nuclear winter have to be to cause the collapse of society (and how easily could society be rebuilt afterwards)? Both questions require robust models of agriculture in extreme situations and models of energy flows in economies where strategic elements might have been destroyed (to know how easy rebuilding would be). Since pandemics and climate change also have societal collapse as a threat, the models needed would apply to them too (they might trigger a nuclear exchange, or at least loss of control over nuclear reactors, depending on what societal collapse looks like).
The national risk register is the closest thing I found in the public domain. It doesn’t include things like large meteorites, as far as I could tell.
WillPearson
[Question] Intellectual property of AI and existential risk in general?
[Question] What is the nature of humans’ general intelligence and its implications for AGI?
[Question] Existential risk management in central government? Where is it?
It’s true that all data and algorithms are biased in some way. But I suppose the question is whether the bias from this is less than what you get from human experts, who often have a pay cheque that might lead them to think in a certain way.
I’d imagine that any system would not be trusted implicitly to start with, but would have to build up a reputation for providing useful predictions.
In terms of implementation, I’m imagining people building complex models of the world, along the lines of decision making under deep uncertainty, with the AI mainly providing a user-friendly interface for asking questions about the model.
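To make that a bit more concrete, here is a minimal sketch of the exploratory-modelling style I have in mind (the model structure, parameter names and ranges are all invented for illustration): the model gets run over many plausible futures, and the AI front end would just translate a plain-language question into a query like this.

```python
import random

def food_supply_model(yield_shock, trade_disruption, stockpile_months):
    """Toy world model: months of food supply after a shock.
    The formula and parameters are illustrative, not calibrated."""
    production = 12 * (1 - yield_shock)       # supply from domestic production
    imports = 4 * (1 - trade_disruption)      # supply from trade
    return production + imports + stockpile_months

def query(threshold=12, runs=10_000):
    """Run the model across many sampled futures and report how often
    supply stays above the threshold - the kind of question an AI
    interface could answer in plain language."""
    ok = 0
    for _ in range(runs):
        outcome = food_supply_model(
            yield_shock=random.uniform(0.0, 0.6),
            trade_disruption=random.uniform(0.0, 1.0),
            stockpile_months=random.uniform(1.0, 6.0),
        )
        ok += outcome >= threshold
    return ok / runs

if __name__ == "__main__":
    print(f"Fraction of sampled futures with adequate supply: {query():.2%}")
```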
[Question] Is working on AI to help democracy a good idea?
Thanks, I did an MSc in this area back in the early 2000s; my system was similar to Tierra, so I’m familiar with evolutionary computation history. Definitely useful context. Learning classifier systems are also interesting to check out for aligning multi-agent evolutionary systems. It definitely informs where I am coming from.
Do you know anyone with this kind of background who might be interested in writing something long form on this? I’m happy to collaborate, but my mental health has not been the best. I might be able to fund this a small bit, if the right person needs it.
Thanks, I’ve had a quick skim of the propositions. It does mention perhaps limiting rights of reproduction, but not the conditions under which they should be limited or how that should be controlled.
Another way of framing my question: if natural selection favours AI over humans, what form of selection should we try to put in place for AI? Rights are just part of the question. Evolutionary dynamics, and what society needs from AI (and humans) in order to continue functioning, are the major part of the question.
I’ve clarified the question, does it make more sense now?
And if no one is working on it, is there an organisation that would be interested in starting working on it?
[Question] Is anyone working on safe selection pressure for digital minds?
I’ve been thinking a bit around secret efforts in AI safety research.
My current thoughts are around: if such an effort exists or does occur, what non-secret efforts might be needed alongside it? E.g. if it develops safe AI, media that shows positive outcomes from AI might be needed so that people aren’t overly scared.
Oh and AI policy might be needed too, perhaps limiting certain types of AI (agentic stuff).
How should important ideas around topics like AI and biorisk be shared? Is there a best practice, or government departments that specialise in handling that?
WillPearson’s Quick takes
Hi, I’m thinking about a possibly new approach to AI safety. Call it AI monitoring and safe shutdown.
Safe shutdown riffs on the idea of the big red button, but adapts it for use in simpler systems. If there were a big red button, who would get to press it and how? This involves talking to law enforcement, legal and policy people. Big red buttons might be useful for non-learning systems; large autonomous drones and self-driving cars are two systems that might suffer from software failings and need to be shut down safely if possible (or precipitously, if the risks from a hard shutdown are less than those of its continued operation).
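As a rough sketch of the decision rule in that last sentence (all fields and figures are hypothetical; real assessments would come from operators and regulators), something like:

```python
from dataclasses import dataclass

@dataclass
class ShutdownAssessment:
    """Illustrative inputs only."""
    risk_continued_operation: float  # expected harm if the system keeps running
    risk_safe_shutdown: float        # expected harm of a controlled stop
    risk_hard_shutdown: float        # expected harm of cutting power immediately
    safe_shutdown_available: bool

def choose_action(a: ShutdownAssessment) -> str:
    # Prefer a controlled stop whenever one is possible and worthwhile.
    if a.safe_shutdown_available and a.risk_safe_shutdown < a.risk_continued_operation:
        return "safe shutdown"
    # Otherwise cut power only if that is still better than letting it run.
    if a.risk_hard_shutdown < a.risk_continued_operation:
        return "hard shutdown"
    return "continue operating under closer monitoring"
```

The hard part is obviously estimating those risks and deciding who is authorised to supply them and press the button, which is where the law enforcement, legal and policy conversations come in.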
The monitoring side of things asks what kind of registration and monitoring we should have for AIs and autonomous systems. Building on work on aircraft monitoring, what would the needs around autonomous systems be?
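By analogy with aircraft registration and ADS-B style position broadcasts, a registry entry and heartbeat might look something like this (a sketch under my own assumptions, not a proposal for specific fields):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AutonomousSystemRegistration:
    """Hypothetical registry entry, loosely modelled on aircraft registration."""
    system_id: str          # unique identifier, like a tail number
    operator: str           # legal entity responsible for the system
    capability_class: str   # e.g. "self-driving car", "large autonomous drone"
    shutdown_contact: str   # who can authorise pressing the big red button

@dataclass
class StatusReport:
    """Periodic broadcast, analogous to an aircraft position report."""
    system_id: str
    timestamp: datetime
    location: tuple[float, float]
    operating_normally: bool

def heartbeat(reg: AutonomousSystemRegistration, lat: float, lon: float, ok: bool) -> StatusReport:
    return StatusReport(reg.system_id, datetime.now(timezone.utc), (lat, lon), ok)
```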
Is this a neglected/valuable cause area? If so, I’m at an early stage and could use other people to help out.
I found this report on adaptation, which suggests that adaptation with some forethought will be better than waiting for problems to get worse. It talks about things other than crops too. The headlines:
Without adaptation, climate change may depress growth in global agriculture yields up to 30 percent by 2050. The 500 million small farms around the world will be most affected.
The number of people who may lack sufficient water, at least one month per year, will soar from 3.6 billion today to more than 5 billion by 2050.
Rising seas and greater storm surges could force hundreds of millions of people in coastal cities from their homes, with a total cost to coastal urban areas of more than $1 trillion each year by 2050.
Climate change could push more than 100 million people within developing countries below the poverty line by 2030. The costs of climate change on people and the economy are clear. The toll on human life is irrefutable. The question is how will the world respond: Will we delay and pay more or plan ahead and prosper?
I’ve been thinking for a while that civilisational collapse scenarios impact some of the common assumptions about the expected value of movement building or of saving for effective altruism. This has knock-on implications for when things are most hingey.
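As a toy illustration of how collapse risk changes the saving calculation (all numbers invented): if a collapse that destroys invested resources has an annual probability $p$, then saving resources worth $V$ for $t$ years at a return $r$ has expected value roughly $(1-p)^{t}(1+r)^{t}\,V$. With $p = 1\%$, $r = 5\%$ and $t = 50$ the multiplier is about 7, so patient saving looks attractive; with $p = 5\%$ it drops to about 0.9, below just spending $V$ now. So estimates of collapse probability can flip the usual conclusion about saving.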
That said, I personally would be quite surprised if worldwide crop yields actually ended up decreasing by 10-30%. (Not an informed opinion, just vague intuitions about econ).
I hope they won’t either, if we manage to develop the changes we need before we need them. Economics isn’t magic, though.
But I wanted to point out that there will probably be costs to preventing deaths from food shortages through adaptation. Are those costs bigger or smaller than those of mitigation by reducing CO2 output, or of geoengineering?
To my knowledge this case hasn’t been made either way, and making it could help allocate resources effectively.
Are there any states that have committed to doing geoengineering, or even experimenting with geoengineering, if mitigation fails?
Having some publicly stated sufficient strategy would convince me that this was not a neglected area.
Does anyone have recommendations for people I should be following for discussion of structural AI risk and the possible implications of AGI systems beyond current deep learning?