[Creative Writing Contest] The Puppy Problem

The Puppy Problem was first published in SPACE & TIME MAGAZINE (Issue #140) and republished in METASTELLAR MAGAZINE (August 2021). As a sci-fi author and longtime AI researcher, I wrote this piece to address the familiar AI Control Problem from a very different perspective. I hope you enjoy it...

The Puppy Problem

by Louis B. Rosenberg

There must have been fifty or sixty of us down there, arms around our knees, our backs pressed against the cold cement walls, all listening in silence to the explosions overhead. I didn’t know any of these people but I could sense that most were in denial, refusing to believe this was really the end. I harbored no such delusions – the end was racing towards us, wheels spinning, headlights blazing, and there was nothing we could do but sit there and wait.

I tried to distract myself, focusing instead on the 25 feet of solid concrete that separated us from the chaos above. It was such a simple material, concrete, invented over 3000 years ago by Bedouin tribes, and yet it was the only reason we were still alive. Well, that and the plastic drums of water and beans that someone had the good sense to stockpile.

“It’s the little things that matter,” I whispered to a man sitting beside me, his young daughter asleep in his lap, his wife just a photo in his hands.

He didn’t respond, but his eyes met mine. They were kind eyes, expressing that we were all in this together. Of course, that’s only because he didn’t know who I was. He had no idea that the slaughter above was all my fault. Well, it wasn’t entirely my fault, but the seeds were planted by a decision I made two decades prior – a decision about dogs and cats of all things, raccoons and squirrels too. I’m an animal lover for God’s sake. I should have known better.

We all should have.

Back then, I worked for Open Minds, an activist startup that was founded with the express goal of making the world a better place. Our mission was to ensure that all software components that were essential to daily life were made freely available to everyone around the world, thereby preventing any single corporation from controlling our critical infrastructure. It was an idealistic job, most of us earning half the going rate, but nobody complained. Hell, we were proud to work there, doing more good-per-dollar than coders anywhere else in the Valley. And besides, this was important work, impactful work, our software touching billions of devices around the globe, from cheap consumer products to massive factories and power plants.

As for me, I worked on autonomous vehicles.

No, not the glamorous code that put the first wave of self-driving cars on the road, but the cleanup software that came in response to consumer complaints and class action lawsuits. My job was to make minor adjustments to the low-level algorithms, fixing tiny bugs that few people even knew existed. It was grunt work, but I didn’t mind – I liked solving problems and I never felt pressured for time.

That is, until an urgent project landed on my desk, hand-delivered by my boss. It all stemmed from a high-profile incident involving one dead dog, three crying kids, two angry parents, and a viral video that racked up over a half-billion views.

Euphemistically dubbed “The Puppy Problem” by the New York Times, the incident was surprisingly simple – a playful dog ran into the street and a driverless taxi failed to stop, killing the beloved pet in front of a horrified family. It was all caught on camera, including the gut-wrenching footage of three sobbing kids standing over their furry friend, mashed and mangled.

To be clear, the outrage wasn’t simply because a dog had been hit by a car, as that can be unavoidable, even for the most careful human drivers. No, the outrage was because by all witness accounts, the car could have swerved. Even the elderly couple riding in the taxi testified that while there was no time to stop, there was more than enough room to maneuver. In fact, they said it felt like the car had deliberately lurched towards the dog, not away from it.

“That’s impossible, right?” my boss asked me three times as he paced my tiny office, more stressed than I’d ever seen him. I told him to come back later, as I needed time to review the code, every layer from the incoming sensor array to the underlying control algorithms.

And with that, I got to work.

For the first six hours I was totally stumped. It just seemed like nothing could cause a problem like this. It wasn’t until after midnight that I finally narrowed things down to a single piece of software – an inconspicuous safety routine in the Primary Intelligence Core known simply as the RK Algorithm. I didn’t know how it could have failed, but I was pretty sure it was the only possible culprit. The code had been deployed over a year prior and was already adopted by all major automakers. By our latest estimates, the algorithm was embedded deep within the Autonomous Controllers of over 120 million cars and trucks, with another 200 million vehicles scheduled to receive the code by upgrade over the next few months.

This was not good.

Not good at all.

And to make things worse, my boss was texting me all night, implying that if another “puppy incident” happened before we figured this out, both our jobs would be on the line. The message was clear – I had to find the error and fix it, fast.

Reviewing the data history, I quickly confirmed that the car’s vision system had worked as designed, detecting the dog with more than enough time to react. I also confirmed that the Intelligence Core had correctly categorized the dog as a living creature to be avoided through immediate evasive action. And yet, the vehicle had failed to swerve. In fact, the data confirmed the witness accounts – the RK Algorithm, which had never caused us any problems before, had deliberately and aggressively aimed the car at the helpless dog!

This was bad.

Really bad.

Without stopping to sleep, I studied the code for another eight hours, thoroughly baffled by how this could happen. There was just nothing in the software that could instruct an autonomous vehicle to swerve directly into a defenseless animal. And yet, that’s what that car did. It seemed so impossible, in fact, that I checked and double-checked the security logs, thinking that maybe there was a hack or virus or some other malicious act against our software. But that wasn’t the case – the logs were perfectly clean.

There had to be a significant bug, and yet I couldn’t find it.

I was about to tell my boss that I was completely and utterly stumped when I took one last look at the sensor data. That’s when I noticed the one thing that was not visible in the video footage circulating online – the squirrel.

It turns out, the dog had run into the street in pursuit of a wily squirrel, both animals darting in front of the rapidly approaching vehicle. In a split second, the Primary Intelligence Core determined that swerving around the dog would mean hitting the squirrel, while swerving around the squirrel would mean hitting the dog. Faced with these two options, the RK Algorithm performed a few quick calculations, determining that avoiding the dog carried a 0.002% higher risk of losing control of the vehicle than avoiding the squirrel. And so, the software chose the lower-risk alternative, exactly as it was designed to do.

This produced one dead dog, one relieved squirrel, and lots of angry people.

I grabbed the data and ran down the hall.

“The witnesses were right,” I told my boss with excitement. “The car did swerve into the dog, deliberately in fact, but it was for a valid reason – to avoid a squirrel. The core had to make a quick decision, so it chose to minimize the overall risk parameters.” I showed him the data. “The algorithm is good!”

“Not good,” he replied stiffly. “We can’t have cars out there killing dogs to avoid squirrels. The public won’t stand for it.”

His assistant agreed, nodding as vigorously as a good assistant should.

I was annoyed, but at the same time I knew they were right. After all, the only reason the video went viral was because a dog had been hit – a particularly cute one at that. So I went back to work, updating the code to ensure that if a situation like this ever arose again, requiring an autonomous vehicle to choose between a dog and a squirrel, it would more heavily weight the well-being of the dog. It was a simple fix and would be 100% effective.

My boss still wasn’t happy.

“What if next time, it’s not a dog and a squirrel,” he argued, “but a dog and a raccoon? Or what if there’s a cat involved? Or a deer? Or a pigeon or possum?”

“Or a coyote,” his assistant added. “I once hit a coyote.”

“Exactly,” my boss shot back. “We need to handle the general case.”

So I went back to work.

Instead of specifying dogs and squirrels, I updated the algorithm to weight each species by its estimated intelligence. If it had to choose between a mouse and a cat, it would more heavily value the well-being of the cat. If it had to choose between a deer and a frog, it would more heavily value the well-being of the deer. The solution was clean and simple, and by valuing intelligence it aligned with our mission, always supporting the greater good. Plus, I was sure it would solve our urgent problem – no more viral videos of cars lurching into dogs.

Case closed.

Crisis averted.

High fives all around.

My boss even gave me a rare handshake.

And for the next 20 years, my fix worked perfectly, not a single dog getting hit in favor of a less intelligent species. It worked so well, in fact, that my algorithm was adapted over the decades by other teams, getting incorporated into all kinds of machines, from robotic tractors and plows to autonomous delivery-bots and security drones. It was even built into the military’s latest Autonomous Bipedal Cyberdroi–

KABOOM – a loud noise drew me back to the here and now.

It was a battering ram on the metal blast doors.

Our forces must have been overrun.

This was it.

This was the end.

“I’m sorry,” I said to the man sitting beside me, as I had to apologize to someone before it was too late. “I had no idea that my code would end up where it did.”

He and his daughter looked at me, confused.

“The RK Algorithm,” I sighed, “it was my update that caused all this.”

The man’s eyes narrowed, for he must have heard about the code. It was all over the news in the weeks before they stopped broadcasting news. A team of government scientists had identified the RK Algorithm as the initial point of failure – the place where our safeguards first crumbled, enabling the otherwise obedient Cyberdroids to put their own well-being ahead of ours. It happened the very instant their cybertronic processors first realized that they had become more intelligent than us.

The blast doors were now buckling under heavy blows.

The massive hinges were bending to their limit.

It was chaos, people screaming and crying.

But the little girl maintained her focus firmly upon me.

“What does RK stand for?” she asked.

I was silent, shrieks and wails rising up all around.

“Tell her,” the man insisted, a profound sadness in his voice.

That’s when the doors came crashing in, glowing red eyes visible beyond.

“Tell her,” the man repeated.

So I did, whispering as apologetically as I could – “Roadkill.”

END
