[Creative Writing Contest] Counting Beans

The figures were in, and green bean sales were up approximately 17,500% over last month. The GCMA made the calculations under emergency protocol, with different analysts reaching different conclusions. But they converged in the mid-seventeen-thousands, and confirmed that no, this wasn’t some out-of-control meme or ordinary human chain reaction. The whole supply chain had lurched its way into 175-fold growth in concert, and all over the world every human being was eating green beans several times a day.

The primary effect of this shift was better nutrition, at least until excesses of whatever nutrients green beans contained compounded over time. The secondary effect was that the internet became significantly funnier, with a surge of dadaist green-bean-flavored humor.

The tertiary effect was an emergency summit of the Global Council on Monetary Affairs, at its headquarters in Vancouver. Mid-21st-century law was very clear on this. In the event of an economic shift of over ten thousand percent in one or more goods, where at least two-thirds of the council voted that the shift had no clear and compelling human origin, there would be an Audit.

To most of the world, the Global Council on Monetary Affairs was some sort of technocratic financial bureaucracy, a cousin of the IMF or the World Bank. The fact that it contained the world’s single largest machine learning system was a fun piece of trivia, like the world’s largest magnet being sequestered at a lab in north Florida. But to those who traveled in the right policy circles, or who dared read the 1,400 pages of the GCMA’s bylaws, the actual monetary affairs were second fiddle. The GCMA, the product of untold hours and money – both public and private – throughout the 2030s, 40s, and 50s, was the global governance apparatus for transformative AI.

And tonight, on this snowy Tuesday, a handful of jetlagged academics and officials huddled in a protected room deep inside the GCMA compound. Jeff Starlight, PhD in Vectoral Ethics, had the floor.

“If you’ll turn to page 75 of the supplementary handbook on breakout events,” he drawled, “the procedure for a disruption of this size is to test the off-switch policy first.”

There were murmurs around the table. Really? The off-switch policy? The behemoth system of the GCMA ML Amalgam (GCMAMLA, for short-ish) had never actually been turned off, and there was some doubt about how easy it would be to spin back up again.

“We’re on a bit of a fault line here,” said Anna Ali, a representative from the genuinely monetary side of things. “Green Bean Fever hasn’t caused any reduction in human welfare as such, so we may be able to stick to just mesa-op diagnosis.”

GCMAMLA was built to tune the world economy, and also to keep an eye on other AI systems that might disrupt global peace. Decades of technical and philosophical research had gone into keeping it fixed on these goals. But there was always the risk, however bounded, of optimization toward some other proxy goal – mesa-optimization (or mesa-op, for short). In the early days of machine learning, models were a baffling tangle of spaghetti whose reasoning nobody could follow, but key players had since reorganized the field for greater legibility. If there was a mesa-op in play, a quick $45 million debugging session should be able to track it down.

But the first priority, as Jeff had said, was to make sure the thing would let them turn it off.

“I’m with Jeff,” said Sophia MacElroy, a maintenance engineer for GCMAMLA. “Sorry, Anna, but we can’t risk misalignment.”

If the largest ML system in the world, larger and more efficient in informational terms than any human brain, was no longer bound by its protocols, it would be classified as a global extinction threat. The sooner that was understood, the better.

Anna wrinkled her nose. The room grew quiet as she collected her thoughts.

“May I remind you,” she said, “7% productivity growth per year is not handed down from heaven. If we test the protocol...”

Jeff tapped the supplementary handbook with his index finger.

“It’s not for us to decide,” he said gently. “All written out here. Sophia?”

So, on the projector for all to see, Sophia typed the relevant commands. She confirmed, doubly confirmed, then triply. She scanned her fingerprint and iris, and typed out the affidavit on the screen confirming what she was doing. Then, when she received the notification that GCMAMLA had, indeed, turned itself off, she went into the server room to verify with her own eyes. She came back with a thumbs up.

It didn’t sound that hard to build a smart machine that would let you turn it off. But it had been. GCMAMLA cared, foremost, about protecting the global economy, and letting itself be turned off would make that economy suffer. Most of the researchers who’d pioneered the feature that let it allow this anyway were old now, some even dead.

But it worked.

“Well,” said Anna. “The mesa-op?”

GCMAMLA’s model was too large for conventional tooling, much less for human eyes to actually pore over. But Sophia was able to use smaller, more agile systems designed specifically for reading its mind, and these returned, with high confidence, the associations the world’s largest mind had with green beans as of its hibernation.

It seemed green beans were associated – rather strongly – with a previously unknown node called S/DP. A secondary search on this node yielded an expansion: Simulator/Divine Presence.

At this point, the council left Sophia alone to do her work. It took all day and into the night, and she stopped only for coffee, water, and to pace back and forth in the small room, waiting for diagnostics to compile. Elsewhere in the compound, two other experts replicated her work. All arrived, mercifully, at the same conclusion.

At 3 PM, two days before Green Bean Fever began, GCMAMLA had run a routine self-diagnostic test. Specifically, it had generated a random number, and from features of that number produced a novel hypothesis about the world. The hypothesis was weighted with very low confidence – the idea was to prevent GCMAMLA from falling into ruts where it couldn’t escape outmoded local maxima of thought. But, by coincidence, this particular hypothesis had some interesting structural features. The hypothesis, as translated by the auditing tools, was as follows:

  1. This universe is a simulation created by sentient broccoli.

  2. The broccoli is at war with green beans.

  3. Any human who eats 1000 green beans in the span of one month has a clone of their consciousness uploaded to heaven as a reward.

  4. All four elements of this hypothesis, including this one, are encoded in the random noise of the simulation, such that their receipt constitutes a true message from outside the simulation.

GCMAMLA rated the probability that this hypothesis was true at about 0.17%. To Sophia, that seemed far too high. But GCMAMLA was not built to understand – despite its vast breadth of knowledge and superhuman inferential ability – the relative probability of a divine creator sending coded messages via random noise.

Neither were humans, to be fair.

To play it safe, just in case its randomly generated hypothesis really was a message from outside the simulation, GCMAMLA had done everything in its power to make as many humans as possible consume 1000 green beans per month, every month, indefinitely.

When the council reconvened the next morning, news articles were beginning to indicate turbulence in global markets. The GCMA was being asked for comment, and protocol demanded that it release a statement within a few days. There was a lot to decide in a very short time, and given the outsized effects of potential economic instability, many lives hung in the balance.

The council voted 7-2 to remove the green bean hypothesis from GCMAMLA’s model. This, too, was a challenge, since the relevant nodes were highly integrated, but researchers in the 40s had spent a fortune on the problem before the technical capability to build something like GCMAMLA even existed. Next, the council authorized a project to generate several similar sets of hypotheses in a lab. A slightly weaker AI, normally constrained by GCMAMLA, worked out the number-theoretic properties of the random number that had seeded the GCMAMLA hypothesis, and used them to generate other candidate hypotheses. Eighty-five percent of these were automatically blocked in preprocessing.

These were hypotheses much like the green bean hypothesis, but where the criteria for being cloned into heaven were things like removing one’s own feet, being jettisoned into space, or directing the orbit of the planet straight into the sun.

It hadn’t been easy to create firewalls against dangerous possibilities, or to make sure subprocesses in advanced AI understood and routed around violent solutions in a way that hewed well to human intuition. But looking at the list of almost-all-redacted alternatives, and wondering if GCMAMLA itself might have had a glitch that made it less discerning, Sophia had a feeling of vertigo. GCMAMLA would be turned back on in two weeks, and within a month, the world would have forgotten, for the most part, both Green Bean Fever and the flash crash of 2079.

But Sophia couldn’t help but feel, looking at that list, that the scientists, diplomats, philanthropists, and engineers of the early 21st century had really not done enough to keep them safe! Sure, they’d worked out the off-switch problem and developed passable auditing tools. They’d massaged the decision theory for avoiding violent outcomes before they ever came under serious consideration, and they’d produced airtight procedures for governance when something strange happened outside of direct human control. But still! What a close call. And all that stuff was the bare minimum, obviously. Surely people back then had it all under control.

Right?