Leverage points for a pause

What are ways to prevent development of dangerous AI?

When I started on this question two years ago, I expected that passing laws to ban dangerous architectures was the way to go. Then I learned of many more approaches from other concerned communities; it was overwhelming.

Here’s a four-level framework I found helpful for maintaining an overview.


Four things need to be available to scale AI:

  1. Data (inputs received from the world)

  2. Work (functioning between domains)

  3. Uses (outputs expressed to the world)

  4. Hardware (computation of inputs into outputs)

At each level, AI gets scaled from extracted resources:

  1. Machine programs turn searched-for data into code that predicts more data.

  2. Workers design these machines to cheaply automate away more workers.

  3. Corporations sink profit into working machines for more profitable uses.

  4. Markets produce infrastructure for the production of more machines.

At each level, AI scaling is increasingly harming people:

  1. Disconnected person
    bots feed on our online data to spread fake posts among people.

  2. Dehumanised workplace
    bots act as coworkers until robots sloppily automate our workplace.

  3. Destabilised society
    robot products are hyped up and misused everywhere across society.

  4. Destroyed environment
    robots build more machines that slurp energy and pollute nature.

Communities are stepping up now to stop harmful AI. You can support their actions. For example, you can fund lawsuits by creatives and privacy advocates to protect their data rights. Or lend media support to unions negotiating contracts so workers aren't forced to use AI. Or advocate for auditors to have the power to block unsafe AI products.


Over the long term, our communities can work towards comprehensive restrictions:

  1. Digital surveillance ban
    no machine takes input data from us, or from any spaces we are in, without our free express consent.

  2. Multi-job robot ban
    no machine learns more than one job function, and then only with workers' free express consent.

  3. Autonomous use ban
    no machine sends outputs into the places we live unless tested and steered by local humans in the loop.

  4. Excess hardware ban
    no machine can process more than just the data humans curate for scoped uses.

I noticed there are ways to prevent present harms and future risks at the same time. Communities with diverse worldviews can act in parallel – to restrict the data, work, uses, and hardware available for scaling AI. While hard, it's possible to pause AI indefinitely.

Crossposted from LessWrong