Can/should we automate most human decisions, pre-AGI?

Meta

TLDR

  • It might be possible to automate most currently-common[1] human decisions in the next ~30 years, in large part because many humans are pretty bad at making decisions to begin with.

  • We can think of most sorts of decision automation using levels similar to those used for autonomous vehicles. We’ll want to begin at levels 1-2 for many things, then gradually work our way up to levels 3-5.

  • Decision automation can be great or terrible for humanity. I lean positive.

  • A lot of software already does decision automation, but there are ways to speed up decision automation in new areas.

  • General-purpose tools to do decision automation might be particularly relevant.

  • Decision automation is one intervention within wisdom and intelligence; it represents a very different path to decision improvement than rationality workshops or much of institutional decision-making.

Epistemic Status
Quickly written (~5 hours), uncertain. I’ve been thinking about this area a lot over the last 5-10 years or so. I haven’t formally studied decision automation.

History
This was originally posted to Facebook here.

QURI
QURI is focused on making estimation infrastructure, which is a subset of decision automation.

The Key Points

A lot of people (including me!) make a lot of pretty stupid decisions.

Software is becoming much better at making decisions.

It seems surprisingly easy to me (maybe $100 billion of tech effort over 20 years) to imagine building systems that would outperform the majority of people’s top 10,000 decisions per year. (Incredibly rough ballparking.)

For example:

  • Which of these n jobs would be best for me?

  • Which menu option should I order at a restaurant?

  • What sorts of medical interventions should I get?

  • This nice-looking salesperson is trying to sell me on a new home loan. Should I go along with this?

  • Does this business deal seem fishy?

  • How should I handle this social situation? Is this person angry at me or just frustrated with other things?

  • Should I move to a different country? Which one?

  • What major should I choose in college?

  • Which suppliers should our company use?

  • Which person should our firm hire for this job? (This will require some human input)

  • What writing changes should we make to this technical report?

It’s true that doing a great job at any of these questions would be incredibly tough, perhaps AGI-complete.

But often, the alternative is not a great decision; it’s a really mediocre decision, sometimes after a whole lot of grief to do the deciding. The bar is often really, really low.

Decision automation doesn’t have to be highly accurate to be preferable to many human decisions. It’s typically dramatically faster and cheaper, and human decision alternatives are very often highly inaccurate. Also, you don’t need to replace human decisions; you can simply make suggestions and provide extra information. Levels 1 and 2 of car autonomy can go a long way before aiming for levels 3 and 4.

Daniel Kahneman has written extensively about how often simple algorithms do better than personal intuitions. Being clever about applying many more simple algorithms would get us pretty far, but of course, we could go further with more complex algorithms.
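To make this concrete, here’s a minimal sketch (in Python) of the kind of simple algorithm Kahneman describes: a fixed weighted sum over a few normalized attributes, applied to a hypothetical job choice. The attributes, weights, and scores here are made-up illustrations, not validated values.

```python
# A minimal sketch of a "simple algorithm" in the Kahneman tradition:
# score each option with a fixed weighted sum of a few normalized
# attributes, rather than relying on holistic intuition.
# The attributes and weights below are hypothetical illustrations.

JOB_WEIGHTS = {
    "salary":   0.3,  # normalized 0-1 within the candidate set
    "commute":  0.2,  # 1.0 = shortest commute, 0.0 = longest
    "growth":   0.3,  # subjective 0-1 rating of learning/advancement
    "team_fit": 0.2,  # subjective 0-1 rating from interviews
}

def score(option: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted-sum score; higher is better."""
    return sum(weights[k] * option[k] for k in weights)

offers = {
    "Job A": {"salary": 0.9, "commute": 0.2, "growth": 0.6, "team_fit": 0.8},
    "Job B": {"salary": 0.6, "commute": 0.9, "growth": 0.8, "team_fit": 0.5},
}

# Rank options from best to worst by score.
for name, attrs in sorted(offers.items(),
                          key=lambda kv: score(kv[1], JOB_WEIGHTS),
                          reverse=True):
    print(f"{name}: {score(attrs, JOB_WEIGHTS):.2f}")
```

Crude as this is, explicit scoring forces consistency across options, which is much of where simple algorithms get their edge in the literature Kahneman draws on.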

We already have a lot of decision automation.

  • People have been trusting GPS navigation systems for a while, and are starting to trust AI with larger-scale driving decisions.

  • Financial decisions have become largely automated with robo-advisors.

  • Siri/Google Assistant make direct recommendations (“Would you like me to open up the messaging app, for you to message Sophie?”) and are becoming more intelligent quickly.

  • Email spam detection has gotten to be pretty good. It’s often not as good as human judgment, but it’s done much faster.

  • Spell checkers and grammar checkers like Grammarly are becoming much more powerful.

Decision automation, called as such, is an established field.[4] I believe automated decision-making has been discussed since the early days of artificial intelligence.

My hunch is that, overall, this is really, really good. A society that makes better decisions is one that prospers.[5]

There are definitely dangers. Perhaps this decision automation will make large groups of humans even less capable of making basic decisions. Perhaps it would lead to a much more complex world that society couldn’t actually steer.

But the other side is also very enticing. The less I have to worry about which dentist to use, the more I can worry about the global problems that we can’t simply solve with technology. We know that many people don’t have the time to be educated enough to make decent political decisions anyway. (See The Myth of the Rational Voter and that cluster of thinking.)

I think people currently identify personally with many of their decisions, so they might be kind of freaked out initially, but I’d expect that in practice it will be pretty fine.

Some people identified as great or unusual drivers, so they were unhappy with driving automation. Some people identified as great assessors of product quality before Amazon reviews were a thing. But I think, on the whole, most people are happy to just focus on things that can’t be so easily done with software.

“Decision Automation Couldn’t Improve My Decisions”

I think it’s easy for smart people[3] reading this to think:

It would be tough to improve on my opinions on important things. They’re quite well researched.

Some responses:

  1. People are often highly overconfident in their ability to make decisions well.

  2. There are many people in the world without as much education, talent, and domain knowledge as you. I’m imagining an 80-year-old grandma who has to choose a health plan, or a person who’s completely ignored health science trying to choose a doctor. Even if automation were only used by other people, it could still go a long way.

  3. Decision recommendations could be almost free and could act as a supplement; they don’t have to be a replacement. They could just be used to catch occasional exceptions. “Levels 1 and 2” of automation for most decisions would still be very useful.

  4. Your opinion in some of these domains was costly to build. If you had known that automation was coming, you might not have made the investment (e.g. doing calculations by hand vs. using a calculator).

General-Purpose Decision Automation

As argued above, a whole lot of software right now is already doing decision automation. We can see the trajectory of software, and might thus conclude that the future of decision automation will just be what we already expected of software. This might not seem very exciting. Software is advancing quickly, but not that quickly.

Right now, decision automation is often highly specialized. There are autonomous driving systems that require massive engineering efforts and won’t have any impact outside of driving decisions.[2] There are email spam detection systems that only apply to email spam. If we have 50,000 types of decisions and apply these strategies, we might need 50,000 unique engineering efforts. Naively, this would take a long time.

One big question is whether there could be new general-purpose methods applicable to many not-yet-attempted forms of decision automation. Think of cross-domain tools like Airtable or various AWS services. I imagine some key uncertainties include:

  1. How much will general-purpose ML tools (like language models) be useful for decision automation across different domains?

  2. How much will improvements in estimation technologies (probabilistic programming, probabilistic libraries, forecasting platforms) be useful for decision automation across different domains?

  3. Are there other clever general-purpose workflows that could be constructed to allow for decision automation in many domains?

Language models are clearly advancing rapidly. Estimation technologies are advancing much more slowly, but are much more neglected. I haven’t seen many clever general-purpose workflows, though I could easily imagine them (I wouldn’t be particularly optimistic here).
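To illustrate uncertainty #1, here’s a hypothetical sketch of what a general-purpose “levels 1-2” workflow might look like: a language model enumerates options with rough probability and payoff estimates for an arbitrary question, and a simple expected-value step ranks them into a suggestion (not a decision). `ask_language_model` is a stand-in for whatever LLM API one might use; the options and numbers are hard-coded placeholders.

```python
# A hypothetical sketch of a general-purpose "suggest, don't decide"
# workflow: a language model proposes options with rough probability and
# payoff estimates, and a simple expected-value step ranks them.
# `ask_language_model` is a placeholder, not a real library call, and any
# numbers an actual LLM returned should be treated as noisy guesses.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    p_good_outcome: float  # model-estimated probability the option works out
    payoff_if_good: float  # rough utility if it does
    payoff_if_bad: float   # rough utility if it doesn't

def expected_value(opt: Option) -> float:
    return (opt.p_good_outcome * opt.payoff_if_good
            + (1 - opt.p_good_outcome) * opt.payoff_if_bad)

def ask_language_model(question: str) -> list[Option]:
    """Placeholder: a real system would prompt an LLM to enumerate options
    and estimate the fields above. Hard-coded here for illustration."""
    return [
        Option("Accept the home loan", 0.4, 10.0, -20.0),
        Option("Shop around for other offers", 0.7, 8.0, -2.0),
    ]

def suggest(question: str) -> str:
    """Levels 1-2 automation: surface a suggestion, leave the decision human."""
    options = ask_language_model(question)
    best = max(options, key=expected_value)
    return f"Suggestion: {best.name} (rough EV {expected_value(best):.1f})"

print(suggest("Should I go along with this home loan offer?"))
```

The same scaffolding could, in principle, be pointed at many of the decision types listed earlier, which is what would make it “general-purpose” rather than one of 50,000 bespoke engineering efforts.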

ML techniques would introduce all the risks associated with ML, and should be handled with the care we would hope for in other ML applications. We might well prefer to focus on non-ML techniques for decision areas that might be dangerous.

Applications for Effective Altruism

Decision automation is very arguably part of Wisdom and Intelligence, or Institutional Decision-Making. It can be useful in the same ways those areas can be.

Around the effective altruism and rationality communities, there’s been much more attention on ways to educate people to become more rational and wise than on decision automation. See CFAR and other rationality bootcamps, for example. But certain clusters of decision automation might be much easier. It’s very difficult to train people to think significantly better, but dramatically easier to say, “just look at this website and do what it says.”



[1] Once some decisions are automated, humans are likely to spend more time on other decisions. So it might be incredibly difficult to automate “all decisions we’ll ever have”, but still realizable to automate “most decisions we have right now.”

[2] For a while, anyway. Elon Musk claimed that Tesla will be able to use its autonomous competencies for other robotics. We’ll see how this holds up.

[3] I’m highlighting this because I’ve heard it a few times in person, often from pretty smart people.

[4] One tricky point is that a whole lot of software, period, is basically doing light decision automation, but it isn’t often referred to specifically as such. I think “decision automation” has been used by certain enterprise players to mean fairly narrow things, and other vendors didn’t want to be associated with those groups. But for our purposes, and I think for most reasonable definitions we might have of decision automation, a lot of software should count.

[5] The opposite is clearly possible, but I think less likely.