Great article!
The analogy to the economy at the end is wonderful. A lot of us don’t realise how badly the economy works. But it’s easy to see by just thinking about AI and what’s happening right now. People are speculating that AI might one day do as much as 50% of the work now done by humans. A naive outsider might expect us to be celebrating in the streets and introducing a 3-day work-week for everyone. But instead, because our economy works the way it does, with almost all of most people’s income directly tied to their “jobs”, the reaction is mostly fear that it will eliminate jobs and leave people without any income.
I’m guessing that the vast majority of people would love to move to a condition (which AI could enable) where everyone works only 50% as much but we keep the same benefits. But there is no realistic way to get there with our economy, at least not quickly. Even if we know what we want to achieve, we just cannot overcome all the barriers and Nash equilibria and individual interests. We understand the principles of each different part of the economy, but the whole picture is just far too complex for anyone to understand or for us, even with total collaboration, to manipulate effectively.
I’m sure that if we were trying to design the economy from scratch, we would not want to create a system in which a hedge-fund manager can earn 1000 times as much as a teacher, for example. But that’s what we have created. If we cannot control the incentives for humans within a system that we fundamentally understand, how well can we control the incentives for an AI system working in ways that we don’t understand?
It’s worrying. And yet, AI can do so much good in so many ways for so many people, we have to find the right way forward.
I think what matters here is having a kill switch, or some set of parameters like [if <situation> occurs, kill], or some other limit on the purview of what a particular model can undertake. If we keep churning out models trained in a general way, there is a high probability of one running riot one day. Placing limits on what they can do will unfortunately undermine the reason we deploy AI in the first place, but as it stands now we need something urgent to keep this existential risk at bay. Or perhaps it’s just our paranoia running riot… Perhaps not.