OpenAI or META generally pays their workers pretty well
Yes, for employed tech workers. But OpenAI and Meta also rely on gig work and outsourcing to a much larger number of data workers, who are underpaid.
there are enough ML researchers unconcerned with Xrisk that there is little need to deceive them
That’s fair in terms of AI companies being able to switch to employing those researchers instead.
Particularly at OpenAI though, it seems half or more of the ML researchers are now concerned about AI x-risk, and were kinda enticed in by leaders and HR to work on a beneficial AGI vision (which, by my controllability research, cannot pan out). Google and Meta have promoted their share of idealistic visions that similarly seem misaligned with what those corporations are actually working toward.
A question is how much an ML researcher whistleblowing by releasing internal documents could turn public opinion against an AI company and/or trigger a tightened regulatory response.
for dealing with AI that might abruptly become more dangerous, especially if it did so in dev rather than prod.
Makes sense if you put credence on that scenario. IMO it does not, given that the model’s functioning has to integrate with, and navigate, the greater physical complexity of model components interacting with larger outside contexts.
A geothermal or nuclear powered AI is no less dangerous than a coal powered one,
Agreed here, and given the energy-intensiveness of computing ML models (vs. estimates of the “flops” in human brains), if we allow corporations to gradually run more autonomously, it makes sense for those corporations to scale up nuclear power.
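To make that energy comparison concrete, here is a rough back-of-envelope sketch; every figure in it is an assumption on my part (a modern accelerator’s power draw and dense FP16 throughput, and one mid-range estimate of brain-equivalent FLOP/s), not an authoritative number:

```python
# Rough, illustrative comparison of energy per unit of computation.
# All figures are assumptions for a back-of-envelope check, not authoritative.

gpu_flops_per_s = 1e15      # assumed dense FP16 throughput of a modern accelerator
gpu_power_w = 700           # assumed board power draw, in watts

brain_flops_per_s = 1e15    # one mid-range estimate of brain-equivalent FLOP/s
brain_power_w = 20          # rough metabolic power of the human brain, in watts

gpu_joules_per_flop = gpu_power_w / gpu_flops_per_s
brain_joules_per_flop = brain_power_w / brain_flops_per_s

print(f"GPU:   {gpu_joules_per_flop:.1e} J/FLOP")
print(f"Brain: {brain_joules_per_flop:.1e} J/FLOP")
print(f"Ratio: ~{gpu_joules_per_flop / brain_joules_per_flop:.0f}x more energy per FLOP on silicon")
```

Under these assumed numbers, silicon burns on the order of tens of times more energy per unit of computation than the brain, which is the gap that drives data-center power demand.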
Besides the direct CO2 emissions of computation, other aspects would concern environmentalists. I used compute as a shorthand, but I would include all chemical pollution and local environmental destruction across the production and operation lifecycles of the hardware infrastructure.
Essentially, the artificial infrastructure is itself toxic. At the current scale, we are not noticing the toxicity much given that it is contained within facilities and/or diffuse in its flow-through effects.
I wrote this for a lay audience:
we miss the most diffuse harm – to our environment. Training a model can gobble up more electricity than 100 US homes use in one year. A data center slurps water too – millions of liters a day, as locals undergo drought. Besides carbon emissions, hundreds of cancerous chemicals are released during mining, production and recycling.
Environmentalists now see how the crypto boom slurped ~0.5% of US energy. But cryptocurrencies go bust, since they produce little value of their own. AI models, on the other hand, are used to automate economic production.
Big Tech extracts profit using AI-automated applications, to reinvest in more toxic hardware factories. To install more hardware in data centers slurping more water and energy. To compute more AI code. To extract more profit.
This is a vicious cycle. After two centuries of companies scaling resource-intensive tech, we are near societal collapse. Companies now scale AI tech to automate companies scaling tech. AI is the mother of all climate catastrophes.
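As a sanity check on the “100 US homes” figure in the excerpt above, here is a small back-of-envelope; both inputs are assumptions (a commonly cited estimate for a GPT-3-scale training run and the average annual electricity use of a US household):

```python
# Back-of-envelope for "more electricity than 100 US homes use in one year".
# Both figures are assumptions: a commonly cited GPT-3-scale training-energy
# estimate and average annual US household electricity use.

training_energy_mwh = 1_300   # assumed: ~1,300 MWh for one large training run
us_home_annual_mwh = 10.6     # assumed: ~10,600 kWh per US household per year

equivalent_homes = training_energy_mwh / us_home_annual_mwh
print(f"One training run ~ electricity for {equivalent_homes:.0f} US homes for a year")
```

With these assumptions, one large training run lands at roughly 120 household-years of electricity, consistent with the “more than 100 homes” framing.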
There are currently basically zero restrictions on model scaling, so it’s hard to see how regulation could make this worse.
AI company leaders are anticipating, reasonably, that they are going to get regulated.
I would not compare against the reference of how much model scaling is unrestricted now, but against the counterfactual of how much model scaling would otherwise be restricted in the future. If AI companies manage to shift policy focus toward legit-seeming risk regulations that fail to restrict continued reckless scaling of training and deployment, I would count that as a loss.
Lawsuits can set precedent, so losing them is not zero cost. For example, Lina Khan’s frivolous lawsuits against tech companies has weakened the FTC’s ability to pursue aggressive antitrust policy because she keeps losing and setting precedents that restrict the FTC in the future.
Strong point. Agreed.
Here is another example I mentioned in the project details:
We want to prepare the EU case rigorously, rather than file fast as happened before in the US. The Stable Diffusion case, which Alex Champandard now advises, previously made technical mistakes (e.g. calling outputs “mosaics”). In US courts, dismissal can set a precedent.
While under EU civil law courts do not set binding precedents, in practice a judge will still look back at other judges’ decisions in previous cases.
Thank you for the thoughts!