AI safety had a health crisis
Getting back on track
sergeivolodin
Correction: “now” replaced by “the sooner the better” :)
Those who downvote:
Here’s my number +41 78 732-01-34 Here’s my email: sergia94@protonmail.com
Here’s my address:
Langstrasse 213 Room 43c 8005 Zurich Switzerland
Tell us why, now. Or be quiet from this point on, forever.
Do you believe in it?
It just seems weird if someone said “to be safe from a deadly disease, what we really need is to develop it as soon as we can”.
I get that the metaphor has holes; it just seems a bit “out there”.
I’d say that “to have safe AGI, we need to do AGI engineering the fastest way possible” is a very extraordinary claim.
It requires very extraordinary evidence to support it.
My claim, which is “can we ask them to explain it?”, seems like a very ordinary one to me.
So it doesn’t require much evidence at all.
Yes, this is about me: I’m a trans girl from Russia. And yes, I’m saying that it would feel weird to me to do something with the EA community.
People here believe it’s OK to believe in the “red pill” (not the one from the movie, the other one; see the most downvoted subthread here). I don’t want this in my life. It doesn’t feel OK to me to believe in that.
People here believe in utilitarianism (see Sabs’s comments; he’s not alone in this), which usually makes people like me the “mere ripples”.
It would just feel weird: a peasant helping the master to deal with some issue together?
The world is not ready for it.
I’d love to be proved wrong though.
My experience is that it goes like this: I say something related to this set of issues, polite or not, anything, and I get downvoted or asked to “rephrase it in some way”.
What I really want is answers.
Like, the RX/TX balance of this conversation is: I sent a lot of stuff to EAs and got not much meaningful response.
So I stop.
I feel I have failed right here. I want EA people to somehow talk to each other and finally decide something together. Not talk to me.
I don’t really know. I’m not the one to ask :)
What is “EA-adjacent”? Well, we can come up with some phrase for a definition. Then we see how some corner cases don’t fit into it, extend the definition, and repeat a few times.
It would work for some phases of EA (like when there were only bed nets) but not for the future; it would need to be updated.
This seems to be mostly what people do here—dividing the world into concrete blocks with some structure on top.
That doesn’t answer any of the concerns; it’s so far away from them, creating some taxonomy of what’s EA and what’s not in EA...
What was the issue? That some people at Anthropic stopped informing us about what’s going on. That the industry is kind of confused about what to do, burned out, and some (me included) say radicalised into “male warriors going bravely and gloriously into Valhalla at full speed”. That there are so many issues with AI today (how to talk to the public? How to get help with this? How to stop current harm? What about regulation? Etc. etc. etc.) that people seem to just ignore it all and focus on the shrimp and infinite ethics. I feel this lethargy and apathy too. Let’s not go there; this has only one possible ending.
Let’s evaluate THAT.
It doesn’t matter how we define it.
Does the culture of OpenAI and EA intersect? Yes. A lot. Are they causally linked? Yes. A lot. Is Anthropic causally linked to all this as well? Yes. A lot.
Is something wrong over there? Yes. Definitely looks like it to me.
That’s all that matters. Since we’re (apparently) people who are supposed to do something about it. Let’s do it. Let’s finally have a debate about whether “ignoring issues today is acceptable”. Let’s discuss “what do we want Anthropic and maybe OpenAI to do”, let’s discuss “how can we get outside people to help”. Let’s finally discuss “whether red-pilled stuff is OK”.
All of this has apparently been ignored for decades.
Can we please not sweep it under the rug?
About the discussion: ethicists are going on TV programs and it’s going pretty well. No “normies don’t understand”, none of that. It’s working quite OK so far.
No need for “write your post in a format that I can parse with my RationalityParser9000. Syntax error on line 1, undefined entity ‘emotion’. Error. Loading shrimp welfare...” 💔
C’mon. Nothing to be afraid of. You really don’t need a tranny from Russia to lead you into a discussion about the next shit that’s about to blow up in Silicon Valley. I’m pretty sure you can do it :)
Don’t ask me, I’m an immigrant here. The “minor inconvenience”, “a mere remainder, mere ripples” in someone’s utopia, an artifact in a render, a glitch, a fluke, a “disappointment to EA leaders seeing me”. I don’t know.
Ask other EAs:)
I’m asking seriously, because I feel what you say speaks to a lot of people in Silicon Valley, so in a way I’m asking this question of you and of them as well.
Concrete question (I don’t have much of that today)
Have you been to Europe?
I’ve been watching this discourse since 2018, including when I was in EA and doing AI safety.
At no point did I see a discussion of whether a big EA-adjacent org is net-positive or net-negative.
It’s some sort of “blind spot”: we evaluate other people’s charities. But ours are, of course, pretty good.
I feel it’s time to have a discussion about this; that would be awesome.
To be more object-level,
YES, I am confused about “releasing models” and “public participation”. Very, very much.
I don’t think it’s just me though.
The Google ethics team is confused too: Margaret Mitchell went to Hugging Face and Timnit Gebru went to do public participation.
All of this is tricky: there’s a culture war in many countries, and somehow, in those conditions, we need to have a discussion about AI. We can’t not do it: secrecy will only make things worse, because of the lack of feedback, the backlash, and the lack of oversight.
Releasing models makes them easier to inspect but also opens the door to bad actors.
It’s a mess.
It’s more like the whole industry is confused.
What seems reasonable is to slow all this down a bit. It’s likely that a lot of ML people are burned out working so fast and not thinking clearly.
We saw Yudkowsky talking on Twitter and trying to save everyone—that doesn’t seem like things are going particularly well.
As you have seen, I am definitely for slowing things down; all in for that.
How can we do that, so that later we can discuss all this mess, or at least be in a sane state for it?
I feel that when you hear “regulation” you assume there’s going to be Putin-style regulation.
Putin is not the only way. Not the 146%.
The EU is not the only kind of regulation that exists either. Not the 30% (I don’t know; it’s a number I just made up, not reflecting anything in particular).
JUST. 10. PERCENT.
Just inform the patients of the mental health startup. Just add a bit of public oversight to AI. Just at least break up Insta and FB so they compete like they should. Just rehire the Google ethics team and let them inform the public about biases and what to do about them, and fix the biggest issues. Possibly done in a few months or so?
Even a teeny tiny bit will go so far.
At the same time you say “boosting growth” and you’re also for “breaking eggs to make an omelet” (go big or go home, move fast and break things, all of those).
So it’s like a train that is very fast and innovative. The people on the train are getting to their destination fast.
The only issue is that the train is rolling over people chained to the tracks :)
And you are the train driver and you say “progress!”
Well, in another life you are the one chained to the tracks :)
Can we just move, like, 10% slower?
Again. In bold:
JUST 10% SLOWER
CAN YOU HEAR ME OH YOU LIBERTARIAN
JUST 10%
Just a bit of regulation. Just enough to unchain the people.
And then I’m good with all you say.
Don’t discuss it with me! Discuss it with the community! :) I’m not an EA!!!
To be less cryptic, it’s not really about me. It’s about the community finally discussing these real pressing problems instead of talking about only shrimp and infinite ethics (nothing wrong with that, but not when there’s a big pressing issue with something being off in AIS)
I’m just one person. I hold the positions that “completely no regulation” is not the way, that “too much regulation” is not the way, “talking to public” is the way, “culture war can be healed”, “billionaire funding only is not the way”, “listening and learning is the way”, “Anthropic seems off”, “AIS culture seems off”, “EAs are way too ignorant of everything that’s current or outside EA”, “red pill is widespread in tech and EA and this is not ok”, “let’s discuss it broadly” in general
My experience led me to these beliefs and I have things to show for each of those.
I don’t really know what the best way of aligning AI is. What is definitely a first step is to at least have some consensus, or at least a concrete map of disagreements, on these issues.
So far, the approach of the community is “big people in famous EA entities do it, and we mostly discuss non-pressing issues about infinities while they, over there, make controversial, potentially civilization-altering decisions (if one believes ™️), unaccountable and vague, on top of an ivory tower”.
My post is a way to deal with it and I see it as a success.
I am not your leader. I will not do things you said I should do. I will not “lead” this discussion—it is impossible.
What I can do is inspire people to do it better than me.
Your move.
This comment is the reason why I started this and the result of my post. I see it as a success.
So, can we have a larger discussion about this?
I am only one person. I did this post.
To do a bigger discussion, there needs to be more people.
I see you care about this.
2+2=...
Well, I feel the “red pill” part is directly relevant to alignment, both for current and long-term issues: both the “values that go into the AI” part and the “power structure of the AI company that builds it” part.
I guess that’s why I included it in my post; I don’t really know. I did it mostly with emotion, and emotion isn’t always well-interpretable (sometimes for the best).
I do feel we (EA, tech, finance and related) need to discuss this as a community: the “red pill” stuff and whether it’s extreme. My experience (n=1) and my interpretation of roughly m=100 other people say that yes, it’s a poorly and vaguely phrased partial theory that mostly explains how traumatic, unhappy, unhealthy relationships work (traumatized people are the ones most responsive to “push-pull” pickup artistry, not because “this is how people are” but because “this is how traumatized people try to be happy and fail”), giving a phenomenological description with a completely wrong and actively harmful explanation of the underlying causes, with links to fascism and dehumanisation, aggressiveness and fatalism.
Personally, I feel in a lot of cases this ideology is the reason people are unsuccessful in relationships: it is a fake cure for a problem that was probably “just” trauma and misunderstanding in the first place. Like, a society-wide misunderstanding between genders. Again, my personal view.
See my other comments about how “a society which is not aligned within itself is unlikely to be able to align other entities well”. Something as massive as this, I believe, should be addressed before anything external can be taken care of.
For the same reason, I feel the “Apple & Android vs Nokia & fxtec” discussion in another thread is very, very directly relevant to alignment, again both power-structure-wise and values-wise.
I don’t really know how best to have such a discussion; again, I’m only one person, I don’t really know :)
I am tired. I want a vacation from all this.
I have hope in the community that they are smart and capable and can sort these things through.
This analysis seems to consider only future value, ignoring current value. How does it address current issues, like the ones here?
Why does a small, secretive group of people who plan to do some sort of “world AI revolution” that brings “UBI” (without much of a plan for how, exactly) consider itself “good” by default?
I’m one of those who was in this secretive group of people before, only to see how much there is on the outside.
Not everyone thinks that what currently exists is “good by default”.
Goodness comes from participation, listening, talking to each other. Not necessarily from some moral theory.
I call for discussing this plan with the larger public. I think it will go well, and I have evidence for this if you’re interested.
Thank you.
Oh, and Sabs, why do you consider your own utopia an insult and a danger, something that I might get blocked for pointing out?
Well then, to the mods: I don’t like utilitarianism, I was hurt by it, and I feel it’s well within my rights to show why utilitarianism might not be OK, with a personal example for Sabs.
And if you ban me: I don’t want to be part of a community that says “it’s normal to ignore the suffering of many people, if they’re not everyone, just select groups”.
This would make it an official statement from EA. We all feel it’s like this, but legit evidence is even better.
There will be no more editing. I have done quite a lot in this direction (not on the EA forum). I have experience in political movements: when one does so much but the community is still not “getting it”, the solution is for the community to figure things out for itself. Or maybe I am wrong after all?
This isn’t a school assignment. Your grade on my post is meaningless.
What does make sense is how you feel about the problem itself and what you will do.
Well, I ate and decided to replace it with “let’s discuss it” altogether :)