See this list.

I have read this list. Which element on the list is 1 percent as powerful as a nuclear weapon, and not a capability we have a substitute for?
I already responded on nuclear power.
Geoengineering: there has been a substantial recent shift on this, as climate change has proven to be real and emissions cuts have been slow. There is a substantial chance it will be done and "unpaused".
Nanotechnology probably has AGI as a required precursor tech.
Vaccine challenge trials extend the lives of elderly citizens; this is not "disempower your rivals" level of power.
Airships are not restricted in any way; they simply don't work: helium shortages, high loss rates to wind, low payload, uncompetitive versus alternatives like trains, trucks, and ships. Why is this on the list?
This list is a very weak argument. No element on it is a temptation, because each one either doesn't pencil in, so there is no pressure to develop it, or is in fact already in use.
By "pencil in" I mean create economic value versus an alternative. Current LLMs can do 20 minutes of work you might pay someone $50/hour for in 10 seconds, for 5 cents.
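To make that arithmetic concrete, here is a back-of-envelope sketch in Python. The $50/hour rate, 20 minutes, 10 seconds, and 5 cents are the figures from the comment above; the ratios simply follow from them.

```python
# Back-of-envelope check of the LLM-vs-human-labor figures quoted above.
human_rate_per_hour = 50.00  # $/hour you might pay a person
task_minutes = 20            # human time the task takes
llm_seconds = 10             # wall-clock time for the LLM
llm_cost = 0.05              # $ of inference cost

human_cost = human_rate_per_hour * task_minutes / 60  # ~$16.67 of labor value
cost_ratio = human_cost / llm_cost                    # ~333x cheaper
speedup = task_minutes * 60 / llm_seconds             # 120x faster

print(f"human cost: ${human_cost:.2f}, "
      f"cost advantage: {cost_ratio:.0f}x, speed advantage: {speedup:.0f}x")
```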
That page lists the value of vaccine challenge trials as $10^12-$10^13, which is substantially more than the market capitalization of the three big AI labs combined.
(I think there is a decent case that society is undervaluing those companies, but the relevant question seems to be their actual valuation, not what value they theoretically should have. I feel fairly confident that if you asked the average American whether they would prefer to have had a vaccine for COVID one year earlier versus GPT-3 one year earlier, they would prefer the vaccine.)
I don't disagree with your valuations viewed from a god's-eye view.
But you need to look at where the value flows. An AI company can sell $20 worth of human labor for $1 and keep that $1. They can probably sell for more than that, as stronger models are natural monopolies.
A vaccine company doesn't get to charge the real value for the vaccine.
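A minimal sketch of the value-capture asymmetry being claimed here. Only the $20-of-labor-sold-for-$1 figure comes from the comment; the vaccine numbers are hypothetical placeholders supplied purely for illustration.

```python
# Illustrative value-capture comparison. Only the $20-for-$1 figure is from
# the comment; the vaccine numbers are hypothetical placeholders.
def capture_fraction(value_created: float, revenue: float) -> float:
    """Fraction of the economic value created that the seller keeps."""
    return revenue / value_created

# AI lab: sells $20 of human-equivalent labor for $1 and keeps the $1.
ai = capture_fraction(value_created=20.0, revenue=1.0)

# Vaccine maker: creates far more value per course (lives extended, illness
# avoided) than it is allowed to charge -- numbers are made up for scale.
vaccine = capture_fraction(value_created=10_000.0, revenue=20.0)

print(f"AI lab captures {ai:.1%} of value; vaccine maker captures {vaccine:.2%}")
# The argument above: the AI lab's slice is captured per task and repeatable
# across most of the labor market; the vaccine maker's sliver is not.
```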
In addition, like I said, there's the arms-race mechanic. If country A develops vaccines with challenge trials and country B does not (assume B can't just buy vaccine access), country A ends up with a slightly older population and higher expenditures; it's not a gain to the government.
If country A has AGI, and by AGI I mean a system that does what it's ordered to do at approximately human level across a wide range of tasks, it can depose the government of country B, who will be helpless to stop it.
Or it could just buy all the assets of the country and bribe the government's politicians to force a merger.
Having 10 or 1,000 times the resources allows for many options. It's not remotely an opportunity someone could forgo.
Ben, I'm sorry, but your argument is not defensible. Your examples are a joke. Many of them shouldn't even be on the list, as they provide zero support for the argument.
This situation looks very much like a nuclear arms race. You win one not by asking your rivals to stop building nukes or to please make them safe but by letting your rival have nuclear accidents and by being ready with your own nukes to attack them.
Same with an AI race. You win by being ready with AI-driven weapons to take the offensive against your rivals and any rogue AI they let escape.
I acknowledge that if you had actual empirical evidence that a nuke, if used, would destroy the planet (some people claim an AGI will have these properties), that's a different situation. But you need evidence. Specifically, the reason a nuke won't destroy the planet is that atmospheric gas is not very fusable and is at low pressure. In the AGI case there need to be similar conditions: there have to be enough insecure computers on the planet for the AGI to occupy, enough insecure financial assets or robotics for the AGI to manipulate the world, or intelligence (which itself needs massive amounts of compute) has to be so useful at high levels that the AGI can substitute for some inputs.
You need evidence for this. Otherwise we can't do anything at all as a species, for fear that something we do might end us. Anyone could make up any plausible-sounding doomsday scenario they like, and convince others, and we would as a species be paralyzed by fear.
In the AGI case there need to be similar conditions: there have to be enough insecure computers on the planet for the AGI to occupy, enough insecure financial assets or robotics for the AGI to manipulate the world
All of these seem true, with the exception that robots aren't needed: there are already plenty of humans (the majority?) who can be manipulated with GPT-4-level generated text.
or intelligence (which itself needs massive amounts of compute) has to be so useful at high levels that the AGI can substitute for some inputs.
The AI can gain access to the massive amounts of compute via the insecure computers and insecure financial resources.
You need evidence for this.
There are already plenty of sound theoretical arguments and some evidence for things like specification gaming, goal misgeneralisation and deception in AI models. How do you propose we get sufficient empirical evidence for AI takeover short of an actual AI takeover or global catastrophe?
actual empirical evidence that a nuke, if used, would destroy the planet
How would you get this short of destroying the planet? The Trinity test went ahead based on theoretical calculations showing that it couldn't happen, but arguably nowhere near enough of them, given the stakes!
But with AGI, half of the top scientists think there's a 10% chance it will destroy the world! I don't think the Trinity test would've gone ahead in similar circumstances.
-----------------------------------
Ben, I'm sorry, but your argument is not defensible. Your examples are a joke. Many of them shouldn't even be on the list, as they provide zero support for the argument.
Downvoted your comment for its hostility and tone. This isn't X (Twitter).
It's the same reason you couldn't blow up the atmosphere. If you need several trillion weights for human-level intelligence across all modalities, or at least 10 percent of the memory in a human brain, and you need to send the partial tensors between cards (I work on accelerator software presently), nobody but an AI lab has enough hardware. Distributed computers separated by Internet links are useless.
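A rough feasibility calculation of why internet-linked machines don't help here, even granting the most communication-light sharding (pipeline parallelism, one activation transfer per boundary per generated token). All of the numbers below are illustrative assumptions of mine, not figures from the comment.

```python
# Rough check: pipeline-parallel inference of a ~1T-weight model sharded
# across ordinary internet-connected machines. All numbers are illustrative
# assumptions, not measurements.
hidden_size = 16_384        # assumed activation width of the model
bytes_per_value = 2         # fp16 activations
n_hosts = 100               # assumed machines needed to hold the weights
hop_latency_s = 0.05        # assumed ~50 ms per internet hop
uplink_Bps = 12.5e6         # assumed ~100 Mbit/s consumer uplink

activation_bytes = hidden_size * bytes_per_value  # ~32 KB per token per hop
hops = n_hosts - 1

# Autoregressive decoding crosses every hop once per generated token.
per_token_s = hops * (hop_latency_s + activation_bytes / uplink_Bps)
print(f"network time per token: {per_token_s:.1f} s")  # ~5 s per token
# Versus microseconds over NVLink/PCIe inside one server: useless in practice.
```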
It is possible that Moore's law, if it were to continue for approximately 30 more years, could lead to the hardware being common, but that has not happened yet.
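The compounding behind that 30-year figure, assuming the classic two-year doubling period (my assumption, not stated in the comment):

```python
# Compounding check for the Moore's-law remark, assuming a ~2-year doubling.
doubling_years = 2
horizon_years = 30
growth = 2 ** (horizon_years / doubling_years)
print(f"{horizon_years} years => {growth:,.0f}x transistors per dollar")
# 2**15 = 32,768x under this assumption -- the scale of improvement needed
# before the required hardware becomes common.
```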
This may not be X, but I reasoned that the information given as evidence was fraudulent. Ben may be well meaning, but Ben is trying to disprove, with false examples, the basic primate decision making that allowed humans to reach this point. It's an extraordinary claim. (By basic reasoning I mean, essentially, primates choosing the best-performing weapon from among the multiple clubs available to them.)