sergeivolodin
See my other comment in the same thread
I’m not insulting you. I’m challenging your belief.
And where is the insult? Which line?.. I’m saying that red pill ideology is fascist. How does that insult you? Well, unless...
And yes, I think that if his multiple wives all say kinda the same thing, it’s legit evidence. Yes.
And yes, I believe this is relevant for alignment. Directly. A community of red pillers creates an AI. Where would it go and what would it do?..
You assume that “anything not super big gets ignored”. The world of “grand battles” is good for some, not all. Same as the world of “small independent entities” is good for some, not all.
Alignment, however, is for the whole of humanity.
So. What do we do with this?
I totally feel this isn’t the only way to do things. There are massive crowdfunding campaigns that work.
I think that an entity that does not oppose the existing power in any way has its own limitations, serious limitations.
Here’s an example from Russia where some charities collect money, but HAVE to say they’re pro-government.
Many of those were criticised, and I think justly, for creating more trouble than their charity was worth.
For example, some used TV ads to gather money for cancer treatment for children.
However, the real problem is this: Putin spent the taxes and gas profits on his wars and “internet research” operations, as well as on personal luxury items.
So these charities, some argue, were used as a “front” by the government to convince people that “medicine is OK, no need to worry”.
Those charities helped only a few, and some argue that if they didn’t exist at all, people wouldn’t hold the false belief that “healthcare works fine in Russia”; they would protest, and maybe we could actually get it.
All because of the charities’ inability to protest against existing power structures.
I think it applies to alignment too: it’s hard to do alignment when one gets funding from a corp that has a financial interest in “profit first, safety second”.
Here we have Microsoft’s CEO saying they’re “gonna make Google dance”, with my comments about how he sounds like a comic book villain:
https://twitter.com/sergia_ch/status/1624438579412799488?s=20
To be serious, I don’t feel it when thinking of the phrase “Google just invested in Anthropic to advance AI safety”. I just don’t feel it.
Don’t know why. Maybe because of how Google handled its ethics team? Or when they said “we’re not gonna be doing weapons” and then, like, started doing it? Judging by the character inferred from their previous actions, it seems rather likely that they just want their own chatbot, to show everyone how smart they are (regardless of the consequences).
A prof once told me how he sees the ML field: people there don’t do it for “humanity” or “knowledge”; they do it because they want to show that their stuff is superior to someone else’s, to show off.
Not everyone’s like this, of course, but ML/tech has this vibe: people from the front row of seats at school who don’t know anything about the real world and instead try to impress the teacher, living off the petty drama between the same people in the front row.
There are a lot of people like this in ML.
Saying this as someone who used to be one of them.
To sum up, here’s my personal story as someone who was in the field; as in my other reply, I invite you to form your own understanding based on whatever you like.
I can’t convince you. I only have a personal story as a beginning AIS researcher; I don’t have the statistics and expected-value calculations people here seem to want.
Thank you
Oh, I missed the “minor harms” part.
Well, I wish that you become a citizen of your own utopia, my darling 💜👿
I wish that you end up as one of those considered “minor”. Maybe then you’ll see?
First good proposal! That’s what we’re here for
C’mon people, we can do it!! 💋💋
To sum up my other comment: yes, I want to confront you about normalizing the red pill. I think it’s fascist and dehumanising.
Yes, I also think it’s relevant to AI alignment, because a community that is not aligned itself, that is “at war” between its own genders (tech people), is unlikely to align something else well.
Saying this as a person from a fascist country who kinda supports an ex-fascist politician trying to do better and be kinder (see Navalny).
Saying this as a sexual abuser and as someone who was mentally abused.
Saying this as one who apologized and saw that what I did was wrong. And one who now sees how stupid and unnecessary it was.
Saying this as one who talked to pro-Putin people a lot to understand how this all works.
There are ways to have both emotion and logic at peace and harmony. Together. Not at war.
Red pill ain’t it.
It’s extreme, aggressive, ugly, stolen, perverted, dead.
Which “right wing” do you mean? I think it was about “small government” (but not “zero government”).
How is “red pill” related to “small government”? :)
You’re using the other “right wing”, the one related to the traditional family. That is one step in that direction: a patriarch in the family. “Red pill” asserts that it has enough explanatory power to overwhelm the role of free will in the decisions of women and men, that the “sexual market” is a clearer explanation for how relationships go.
I’d say it’s a bit of an extreme step, because it claims a single simple objective for the whole of humanity, “women procreate, men fight”, creating a stereotype of masculinity as “winning fights, physical or metaphorical”.
This theory completely ignores male singers who don’t seem to fit this stereotype. Some women loved Michael Jackson, and he doesn’t seem to be the “fighting type”, rather the feelings type.
This theory has blind spots, and it is asserted quite forcefully: it has a mechanism where one is scared of “poisoning their market value” if they do something out of line, as seen in “chad/incel” memes, for example.
Saying this as a person from Russia who saw the rise of fascism in our country, how our culture war went from the internet to the battlefield. I believed in this. I have seen it to be false. Saying this as a person who is responsible for sexual assault and who tries to heal and be better. “Red pill” is b.s.; see my posts on Mastodon for more on this.
It’s an extreme theory that ignores important corner cases (queer people) and tends to make people resentful towards anything not fitting the theory, all while taking away “free will” and replacing it with a “simple objective function”, without any research and despite clear outliers/exceptions, and it is linked to male violence. Ironically, it turns people into machines, the very thing the real “red pill from the movie” was not really pro: the concept’s name is stolen from a movie by trans authors and basically turned upside down in an evil, twisted way. Neo was like, “I’m gonna talk to the machines and bring peace to y’all. The war is gonna end.” Red pillers are like, “we like guns, force, and fighting, and don’t like to talk about complex things much” 🤷♀️
“Small government” right wing is not a pejorative. “Traditional cisgender relationship with a man deciding things” is OK too, if a woman likes it too (and isn’t forced into it). “Red pill” is, like, way out there for me: it’s the notion that a man can take any woman. Nonsense, if we consider that some women cheered when Trump was like “grab them and such”, while some women would not like a single violation of consent, like tagging on Twitter. Women are just people. People are different.
Musk’s ex-wife made a song about him.
https://m.youtube.com/watch?v=ADHFwabVJec
“If I loved him any less, I’d make him stay / But he has to be the best, player of games”
She asserts she is aware of the ongoing “push-pull” pickup artistry from him, but refuses to apply it herself to achieve her goal, and then says that the dude is basically always at work.
I’d say, going by the video and using subjective holistic judgement, that he’s legit red-pilled.
And my post above says that red pill is extreme and linked to fascism.
And I say it’s related to so many cases of sexual assault in tech, EA, finance: people see “simple markets” where there’s just so much more complexity, and not necessarily much of a market :)
Bite me :)
Also, I feel that I’m replying to something a bit out of context here. I do feel that a lot of people on this forum hold similar beliefs, though, and I think it’s connected to how people see AI alignment and even life: libertarianism sees the world as a “stage/arena” and people as “warriors” or smth. This is one way of life, perfectly good for some people, I guess.
It is a system, and every system has assumptions. Here the assumption is “people want to be in a state of continuous war”. That assumption does not hold for all people.
You’re in the US. I’m in Europe. I am waiting to order my European (UK) smartphone with a physical keyboard, as are a lot of XDA-dev ppl, once the company, fxtec, finally starts shipping again :)
People are not the same.
If people in the US democratically and consensually want to have products that are “innovative even if harmful”, that is OK with me.
Dragging the whole world into this (Altman has worldwide plans) is something I am not on board with. Even in the US, not everyone, not everyone at all, agrees that “disrupting” things is good.
Say, artists: a whole profession that tech made its enemy. A whole set of friendships broken, “disrupted” in the name of “progress”.
You see it this way: “brave Silicon Valley achieves all tasks with libertarianism”. I see it as “inferior products such as the iPhone and Android dominate the market because their makers lobbied everyone and broke capitalism :)”. European companies had alternative plans for how phones could look: https://en.m.wikipedia.org/wiki/Nokia_N900
It had open-source customizable social media clients (no walled garden), root out of the box, and a single messaging app where contacts are merged when a person has one account on WA and one on FB. Of course, a touch screen. And you can type whole books on it.
Again. This phone doesn’t look like what Nokia was doing 5 years before that, at all. This is innovation.
This is a better product in terms of features for a lot of people. It is innovative.
To sum up, I still feel that slower, small and multiple-company, and regulated “just right” capitalism produces steady innovation and safety. Unregulated, monopolistic capitalism produces things like McDonald’s: massive, exploitative, low quality, not changing much in 10 years, kinda harmful, addictive...
That is how capitalism is supposed to work, yes. It is a system. Any system can be broken. It needs assumptions to work. Capitalism is a human system operating well under certain assumptions, not a law of the universe (like Schrödinger’s equation, which is true in all cases except quantum gravity, a very rare thing, not usually important or present in everyday life).
The assumptions are:
1. A lot of small entities competing, creating field-like dynamics where, if a company is suboptimal, a new one is created with little overhead, like a new Linux process replacing a failed one. This is not the case in tech. There are monopolies, and novel contexts such as “network effects”. Monopolies change the dynamics, and the “field metaphor” no longer applies: there are simply not enough particles for the field approximation to work. For example, when Nokia, Motorola, and all those old phone companies were making phones, there was innovation. Now we have the iPhone and Android, and there are not many new features. Instead, phones are getting more walled-gardened, something a significant portion of consumers doesn’t want. In the Linux analogy, the existing big process takes up so much memory that a new one can’t even allocate. The system is “jammed”.
The reason the “market” doesn’t work here is the monopoly: the companies don’t have competition, already have enough profit, and kinda agree with each other to stay kinda the same. Both Android and the iPhone become more walled-gardened, despite a significant portion of consumers wanting an alternative.
2. The choices of consumers and businesses are informed. Consumers roughly know what they are getting. Consider a case where, say, McDonald’s starts making burgers from stale meat but doesn’t tell its customers. So far, nobody knows. If a new company offers a better product with fresh meat, not many consumers would go there (assuming McDonald’s fried the meat so hard that consumers can’t tell anymore). However, if there’s a news story about stale meat at McDonald’s being dangerous to health, people would likely go to the new business.
This applies to LLMs. People are being sold hype, not the real “AI from sci-fi assisting humans”. People are harmed because they eat “stale meat” without being informed about what they’re eating.
The hype seems stable, and if we look at historical precedents of hype, like crypto, it can go on for years without “bursting”.
In addition, the very nature of LLMs and how they can be used for misinformation make it even less likely that there would be well-informed choices in our model. The product (an LLM) is different from a traditional product (like shoes) that is analyzed in models of capitalism. This product changes how things work; it changes the model.
There are other assumptions, like: there is some regulation. (The medical field couldn’t go without regulation. It didn’t work: people were buying literal snake oil, and companies were literally poisoning places; see the story behind this movie: https://en.m.wikipedia.org/wiki/Dark_Waters_(2019_film) )
For those who are “fully libertarian”: I’m talking about “some regulation”, not “Stalin”. I’m from Russia; we have “Putin”. He regulates everything, including whether I can say “stop the war” or not. That is too much regulation.
Then there’s the case of the chemical monopolist with a lot of lawyers and connections polluting rivers and killing people (see above). Not regulating that is “too little regulation”.
I am a leftist liberal, yet I am for some regulation, not too much regulation. Too much regulation is extreme. Too little is extreme. There is a “just right” that is probably subjective, but we can discuss it together and agree on what is “just right” for us all.
Ideally, the businesses are many, they are competing and they are independent. Ideally, the strength of the government is enough to stop monopolies and crimes of companies, but not enough to dominate all the companies in all domains. A bit of this. A bit of that.
That is how I see it.
To sum up my last point: LLMs don’t have enough regulation (basically none so far).
Hope I explained it. I tried to do it from first principles, not from any dogma.
I acknowledge and agree with your criticism.
I did question these assumptions (“we do capabilities to increase career capital, and somehow stay in this phase almost forever” and such) starting in 2020, in the field, talking to people directly. The reactions and disregard I got are the reason I feel the way I feel about all this.
I was thinking “yes, I am probably just not getting it, I will ask politely”. The replies I got are what causally preceded me feeling this way.
I am traumatized, and I don’t want to engage fully logically here, because I feel pain when I do that. I was writing a lot of logical texts and saying logical things, only to be kinda dismissed, like “you’re not getting it, we are going to the top of this, maybe you need to be more comfortable with power” or something like that.
Needless to say, I have pre-existing trauma about a similar theme from childhood, family etc.
I do not pretend to be an objective EA doing objective things. After all, we don’t have much objective evidence here except for news articles about Anthropic 🤷♀️
So, what I’m doing here is simply expressing how I feel, expressing that I feel a bit powerless about this problem, and asking for help in solving it, inquiring about it, and making sure something is done.
I can delete my post if there is a better post and the community thinks my post is not helpful.
I want to start a discussion, but all I have is a traumatized mind tired of talking about it, one that has tried every possible measure I could think of.
I leave it up to you, the community, the people here, to decide: post a new post, ignore it, keep this one and the new one, or only the new one, or write Anthropic people directly, or go to the news, or ask them on Twitter, or anything you can think of. I do not have the mental capacity to do it.
All I can do is write that I feel bad about it, that I’m tired, that I don’t feel my CS skills would be used for good if I joined AIS research today, that I’m disillusioned, and that I ask the community, people who feel the same, to do something if they want to.
I do not claim factual accuracy or rationality metrics. Just raw experience, to serve as a starting point for your own actions on this, if you are interested.
My mind right now can do talks about feelings, so I talk about feelings. I think feelings are a good way to express what I want to say. So I went with this.
That is all. That is all I do here. Thank you.
In general, I feel that it all could have been a perfectly good research direction, if only it weren’t done so fast. And the reason it goes so fast is the AI hype. For example, Altman himself, instead of addressing the concerns, writes “AGI utopia” blog posts, seeing LLMs as a “path to AGI”. While it is an achievement, there are other techniques that are not included and not supported by LLMs, such as causality, world-model coherency, self-reference (the ability of the model’s output, text, to reference its inner states, neuronal activations, and vice versa), etc.
Yet it’s advertised as “almost AGI that is good for a lot of tasks”, even when it sometimes fails on simple number-addition tasks.
Again: he advertises something straight out of a research lab as a “business solution”. And people buy it.
To sum up, the harms today originate from the pressure to do it all fast, created by unrealistic hype. Here’s an analogy I have: https://twitter.com/sergia_ch/status/1629467480778321921?s=20
Concrete harms, today:
- There was an “AI mental health” app unleashed without disclosing full specifics to the trial patients, sometimes with people not knowing they were talking to an AI, or that they were in a trial at all. As a result, when they found out, they were understandably more depressed.
- There are artists whose work was taken for training without consent; as a result, some lost job opportunities, without the “UBI” promised by Altman.
- There is bias against certain groups of people in systems already processing resumes, issuing legal trial verdicts, etc.
- There was an “AI girlfriend” startup, Replika, that was making abusive statements. Later, the “girlfriend” functionality was turned into a “friend” functionality. As a result, people are a bit traumatized.
- There is concern about misinformation being generated at scale more easily, significantly worsening the culture war and probably making it more insane and violent.
And more: see posts by ethicists and their news stories. Feel free to talk to them and ask questions, but they don’t like being asked to repeat once again the things they have talked about over and over in basically every item they broadcast. Totally OK to ask questions after reading.
In all those cases it’s mostly the speed and the “move fast, break things” attitude that is the problem, not the tech itself. For example, if the patients had been informed properly, the trial done correctly, and the app re-trained properly to heal mental health issues, it might have been something. The way it was done, it seems harmful.
How do you feel about the “red pill” they seem to embrace (Musk openly, and Thiel by evidence)? Do you feel this worldview affects their actions? Do you think it is extreme? Which political affiliation does the “red pill” seem to belong to, left or right? Do you believe in that “sexual markets” stuff? Thank you for your replies.
I don’t think it’s close to AGI either, or that it’s good tech. It does harm people today, though. And it is an alignment problem EAs talk about: a thing that is not doing what it’s supposed to do is put in a position where it has to make a decision. Just not about superintelligence. See my motivation here: https://www.lesswrong.com/posts/E6jHtLoLirckT7Ct4/how-truthful-can-llms-be-a-theoretical-perspective-with-a
My emotional state is relevant here. I’m one of the people who was excited about safety. Then I slowly saw how the plan is shaky and the decisions controversial (advertising OpenAI jobs; doing “first we get a lot of capabilities skills, then do safety”, which usually means a capabilities person with an EA t-shirt and not much safety).
My emotional state summarises the history that happened to me. It is relevant to my case: I am showing you how you would feel if you had gone through my experience, in case you choose to believe it.
It’s not a “side note”; it’s the evidence I’m showing to say “I have concerns, and this feels off; a pattern rather than a one-off case”. Emotions are good for holistic reasoning.
I don’t have the energy to write a “full-fledged EA post with the dots over the i’s and all that”. I mean, I feel I’m one of the “plaintiffs” in this case. I believed EA; I trusted all those forum posts. Now I see something is wrong. I am simply asking other people to look into this and say how they feel and think about it.
So we figure out something together.
I feel I need support. No, I don’t want to go to the FB mental health EA support group, because this is not about mental health specifically: it’s about how the field of AI safety is. It’s not that “I feel bad because of a chemical imbalance in my mind”. I feel bad because I see bad things :)
I have written at length on Twitter about my experiences. If you’re still interested, I can link it a bit later.
For a traumatized person it’s painful to go through all this again and again.
Thank you.
I got somewhat disappointing answers on LW: people telling me how “this will not have impact” instead of answering the questions :) Seriously, I think it’s a 15-minute problem for someone who knows theoretical CS well. It could have some impact on a very hard problem. Not the best option, probably, but what is better?
Isn’t it easier to spend 15 minutes working on a CS theory problem, meeting new people, and learning something, instead of coming up with a long explanation of “why this is not the best choice”?
I’m a feminist, but I’ll give a trad cis example to illustrate this, because I don’t expect a feminist one to go well here (am I wrong?). In How I Met Your Mother, the womanizer character Barney Stinson once had an issue: women were calling him every minute and wanting to meet him, and he couldn’t choose which one was “the best” choice. As a result, he didn’t get to know any of them.
https://m.youtube.com/watch?v=_twv2L_Cogo
I feel it’s the same: so much energy is spent on “whether it’s the best thing to do” that even 15 minutes will not be spent on something new. An illusion of exploration: not actually trying the new thing, but just quickly explaining why it’s “not the best”, spending most of the time “computing the best thing” and never actually doing it...
Am I not seeing it right? Am I missing something?
Or maybe people here love Peter Thiel, Musk, and red pills? In that case, I guess, there’s not much to discuss. At least I expected some answers. It’s as if people don’t even bother to explain what is being done; it’s just assumed to be “correct”?
It is rambling and incoherent. See why here: https://forum.effectivealtruism.org/posts/bmfR73qjHQnACQaFC/call-to-demand-answers-from-anthropic-about-joining-the-ai?commentId=EaBHtEpJCEv4HnQky
It’s a part of what I’m talking about here.