I don’t think it’s close to AGI either, or that it’s good tech. It does harm people today, though. And it is an instance of the alignment problem EAs talk about: a thing that does not do what it’s supposed to do is put in a position where it has to make decisions. Just not about superintelligence. See my motivation here: https://www.lesswrong.com/posts/E6jHtLoLirckT7Ct4/how-truthful-can-llms-be-a-theoretical-perspective-with-a
how does it harm people? I mean I guess there’s a problem of people taking these LLM outputs as oracular truths because they don’t realize how frequently they hallucinate, but isn’t this just a self-correcting problem eventually as people figure it out? We don’t instantly shut down access to all new tech just because people struggle to use it correctly at first.
In general, I feel that it all could have been a perfectly good research direction, if only it hadn’t been done so fast. And the reason it goes so fast is the AI hype. For example, Altman himself, instead of addressing the concerns, is writing “AGI utopia” blog posts, seeing LLMs as a “path to AGI”. While LLMs are an achievement, there are capabilities they lack and do not support, such as causality, world-model coherency, and self-reference (the ability of the model’s output text to reference its inner states and neuronal activations, and vice versa, etc.).
Yet it’s advertised as “almost AGI that is good for a lot of tasks”, even though it sometimes fails on simple number-addition tasks.
Again: he advertises something straight out of a research lab as a “business solution”. And people buy it.
To sum up, the harms today originate from the pressure to do it all fast, created by unrealistic hype. Here’s an analogy I have: https://twitter.com/sergia_ch/status/1629467480778321921?s=20
Concrete harms, today:
there was an “AI mental health” app unleashed without disclosing the full specifics to the trial patients; sometimes people didn’t know they were talking to an AI, or that they were in a trial at all. As a result, when they found out, they were understandably more depressed
there are artists whose work was taken for training without consent; as a result, some lost job opportunities, without the “UBI” Altman promised
there is bias against certain groups of people in systems that are already processing resumes, informing legal trial verdicts, etc.
there was an “AI girlfriend” startup, Replika, whose bot was making abusive statements. Later, the “girlfriend” functionality was turned into a “friend” functionality. As a result, people are a bit traumatized
there is concern about misinformation being generated at scale more easily, significantly worsening the culture war and probably making it more insane and violent
And more: see posts by ethicists and their news stories. Feel free to talk to them and ask questions, but they don’t like being asked to repeat once again the things they have talked about over and over in basically everything they broadcast. It’s totally OK to ask questions after reading.
In all those cases it’s mostly the speed and the “move fast and break things” attitude that is the problem, not the tech itself. For example, if the patients had been informed properly, the trial done correctly, and the app re-trained properly to heal mental health issues, it might have been something. The way it was done, it seems harmful.
these just seem like incredibly minor and/or unlikely harms tbh, and the idea that they merit any kind of advance regulation is just crazy talk imo. This is capitalism, we make things, product goes out, it happens! We trust the market to address most harms in its own time as a default. Unless the bad thing is really bad—some huge environmental pollutant, a national security risk, a world-ending threat—then we don’t do the European Permit Raj thing. We let these things work themselves out and address any problems that arise post hoc, considering the benefits as well!
Concrete question (I don’t have many of those today):
Have you been to Europe?
I’m asking seriously, because I feel what you say speaks to a lot of people in Silicon Valley, so in a way I’m asking this question of you and of them as well.
That is how capitalism is supposed to work, yes. But it is a system, and any system can break; it needs its assumptions to hold. Capitalism is a human system that operates well under certain assumptions, not a law of the universe (unlike, say, Schrödinger’s equation, which holds in all cases except quantum gravity, a very rare regime that is not usually important or present in everyday life).
The assumptions are:
a lot of small entities competing, creating field-like dynamics: if, say, a company is suboptimal, a new one is created with little overhead, like a new Linux process replacing a failed one. This is not the case in tech. There are monopolies, and novel dynamics such as “network effects”. Monopolies change the dynamics, so the “field metaphor” no longer applies; there are simply not enough particles for the field approximation to work. For example, when Nokia, Motorola, and all those old phone companies were making phones, there was innovation. Now we have iPhone and Android, and there are not many new features. Instead, phones are getting more walled-gardened, something a significant portion of consumers doesn’t want. In the Linux analogy, the existing big process takes up so much memory that a new one can’t even allocate. The system is “jammed”.
The reason the “market” doesn’t work here is monopoly: the companies don’t have competition, already make enough profit, and more or less agree with each other to be the same: both Android and iPhone are becoming more walled-gardened, despite a significant portion of consumers wanting an alternative.
the choices of consumers and businesses are informed: consumers roughly know what they are getting. Consider a case where, say, McDonald’s starts making burgers from stale meat but doesn’t tell customers. So far, nobody knows. If a new company offers a better product with fresh meat, not many consumers would go there (assuming McDonald’s fried the meat so hard that consumers can’t tell anymore). However, if there’s a news story that the stale meat at McDonald’s is dangerous for health, people would likely go to the new business.
This applies to LLMs. People are being sold hype, not the real “AI from sci-fi assisting humans”. People are harmed because they eat “stale meat” without being informed about what they’re eating.
The hype seems stable, and if we look at historical precedents of hype, like crypto, it can go on for years without “bursting”.
In addition, the very nature of LLMs, and how they can be used for misinformation, makes it even less likely that there would be well-informed choices in our model. The product (an LLM) is different from a traditional product (like shoes) analyzed in models of capitalism. This product changes how things work; it changes the model itself.
There are other assumptions too, like the existence of some regulation. The medical field couldn’t go without regulation: it didn’t work, people were buying literal snake oil, and companies were literally poisoning places; see the story behind this movie: https://en.m.wikipedia.org/wiki/Dark_Waters_(2019_film)
For those who are “fully libertarian”: I’m talking about “some regulation”, not “Stalin”. I’m from Russia; we have “Putin”. He regulates everything, including whether I can say “stop the war” or not. That is too much regulation.
And there’s the case above of a monopolist chemical company, with a lot of lawyers and connections, polluting rivers and killing people. Not regulating that is “too little regulation”.
I am a leftist liberal, yet I am for some regulation, not too much regulation. Too much regulation is extreme. Too little is extreme. There is a “just right” that is probably subjective, but we can discuss it together and agree on what is “just right” for us all.
Ideally, the businesses are many, they are competing and they are independent. Ideally, the strength of the government is enough to stop monopolies and crimes of companies, but not enough to dominate all the companies in all domains. A bit of this. A bit of that.
That is how I see it.
To sum up my last point: LLMs do not have enough regulation (basically none so far).
Hope I explained it. I tried to do it from first principles, not from any dogma.
I’m sorry but I just flatly reject this and think it’s trivially wrong. EA will be a massive force for bad in the world if it degenerates into some sort of regulatory scam where we try to throttle progress in high-growth areas based on nothing but prejudice and massively overblown fears about risk. This is a recipe for turning the whole world economy into totally dysfunctional zero-growth states like Italy or the UK or whatever. There’s a reason why Europe has basically no native tech industry to speak of and is increasingly losing out to the US even in sectors like pharma where it was traditionally very strong. This anti-bigness attitude and desire to impose regulation in advance of any actual problems emerging is a lot of the reason why. It places far too much faith in the wisdom of regulators and not enough in markets to correct themselves just fine over time. The fact that you picked the massively price-competitive and feature-competitive smartphone industry as an example of market failure is a prime example of Euro-logic completely divorced from basic economic logic.
Also, I feel that I’m replying to something a bit out of context here. I do feel that a lot of people on this forum hold similar beliefs, though, and I think it’s connected to how people see AI alignment and even life: libertarianism sees the world as a “stage/arena” and people as “warriors” or smth. This is one way of life, perfectly good for some people, I guess.
It is a system, every system has assumptions. Here the assumption is “people want to be in a state of continuous war”. That assumption does not hold for all the people.
No, the assumption is simply that we don’t want to be poor and starving. There are a lot of very, very, very poor people in the world. I would like their situation to improve. That means some economic growth. All the EA bednets and GiveDirectly and all this crap blah blah are absolutely worth zero, nada, nyet, compared to the incredible power of economic growth. Growth is so powerful because fast growth in one place can drag along loads of other places: look at how China’s rise massively boosted growth in the countries in its supply chain. In fact you can make a pretty good argument that global development has been a complete disaster for decades in every country apart from China AND those countries in its supply chain! Vide https://americanaffairsjournal.org/2022/11/the-long-slow-death-of-global-development/
Obviously this is a huge number of people and worth celebrating despite the growth failures across LatAm and Africa, but it means we can do better, and it also means that boosting growth in the West through e.g. AI and LLMs (not at the moment, a hallucinating chatbot is pretty useless, but maybe we can make it good!) is potentially an absolutely massive win for the world. So accordingly I am massively skeptical of the growth-killing Euro-regulatory impulse towards tech, because it’s clearly a) working out badly for Europe and b) very, very bad for the world if it somehow got applied everywhere.
At the same time, you say “boosting growth” while also being for “breaking eggs to make an omelet” (go big or go home, move fast and break things, all of those).
So it’s like a train that is very fast and innovative. The people on the train are getting to their destination fast
The only issue is that the train is rolling over people chained to the tracks :)
And you are the train machinist and you say “progress!”
Well, in another life you are the one chained to the tracks :)
Can we just move, like, 10% slower?
Again. In bold
JUST 10% SLOWER
CAN YOU HEAR ME OH YOU LIBERTARIAN
JUST 10%
Just a bit of regulation. Just enough to unchain the people.
And then I’m good with all you say.
I feel that when you hear “regulation” you assume there’s going to be Putin-style regulation.
Putin is not the only way. Not the 146%
The EU is not the only kind of regulation that exists either. Not the 30% (I don’t know, it’s a number I just made up, not reflecting anything in particular).
JUST. 10. PERCENT.
Just inform the patients of the mental health startup. Just add a bit of public oversight into AI. Just at least break up Insta and FB so they compete like they should. Just rehire the Google ethics team and let them inform the public about bias and what to do about it, and fix the biggest issues. Possibly done in a few months or so?
Just a teeny tiny bit will go so far.
You’re in the US. I’m in Europe. I am waiting to order my European (UK) smartphone with a physical keyboard, as are a lot of XDA-dev people, once the company, fxtec, finally starts shipping again :)
People are not the same.
If people in the US democratically and consensually want to have products that are “innovative even if harmful”, that is OK with me.
Dragging the whole world into this (Altman has worldwide plans) is something I am not on board with. Even in the US, not everyone, not everyone at all, agrees that “disrupting” things is good.
Take artists: a whole profession that tech made its enemy. A whole set of friendships broken, “disrupted” in the name of “progress”.
You see it this way: “brave Silicon Valley achieves all tasks with libertarianism”. I see it as “an inferior product such as the iPhone and Android dominating the market because they lobbied everyone and broke capitalism :)”. European companies had alternative visions for what phones could look like: https://en.m.wikipedia.org/wiki/Nokia_N900
It had open-source, customizable social media clients (no walled garden), root out of the box, and a single messaging app where contacts are merged when a person has one account on WA and one on FB. And, of course, a touch screen. And you can type whole books on it.
Again: this phone doesn’t look at all like what Nokia was doing 5 years before. This is innovation.
This is a better product in terms of features for a lot of people. It is innovative.
To sum up, I still feel that slower, many-small-companies, “just right”-regulated capitalism produces steady innovation and safety. Unregulated, monopolistic capitalism produces things like McDonald’s: massive, exploitative, low quality, not changing much in 10 years, kind of harmful, addictive...
Oh, I missed the “minor harms” part.
Well, I wish that you become a citizen of your own utopia, my darling 💜👿
I wish that you are one of those who are considered “minor”. Maybe then you’ll see?
a word of warning: the mods here are really dumb and over-censorious and barbed but friendly banter like this is highly frowned upon, so while I absolutely don’t give a fuck you want to be careful w/ this kind of chat on this place....take it from someone who keeps getting banned
I do think the harms seem very minor though, and especially minor relative to the potential benefits, which could be quite large even if it’s just automating boring stuff like sending emails faster or whatever! Add it up over an entire economy & that’s a lot of marginal gains.
Well then, to the mods: I don’t like utilitarianism, I was hurt by it, and I feel it’s well within my rights to show why utilitarianism might not be OK, with a personal example for Sabs.
And if you ban me: I don’t want to be part of a community that says “it’s normal to ignore the suffering of many people, as long as they’re not everyone, just select groups”.
This would make it an official statement from EA. We all feel it’s like this, but legit evidence is even better.
Oh, and Sabs: why do you consider your own utopia an insult and a danger, something I might get blocked for pointing out?