AI safety
had a health crisis
Getting back on track
sergeivolodin
Do you believe in it?
Just seems weird if someone said “to be safe from a deadly disease, what we really need is to develop it as soon as we can”
I get that the metaphor has holes; it just seems a bit “out there”.
I’d say that “to have safe agi, we need to do agi engineering the fastest way possible” is a very extraordinary claim.
It requires very extraordinary evidence to support it.
My thing which is “can we ask them to explain it” seems like a very ordinary claim to me.
So it doesn’t require much evidence at all.
Yes, it’s about me, I’m a trans girl from Russia. Yes, I’m saying that it would be weird to me if I did something with the EA community.
People here believe it’s ok to believe in “red pill” (not the one from the movie, the other one, see in the most downvoted subthread here). I don’t want this in my life. It doesn’t feel ok to me to believe in that.
People here believe in utilitarianism (see comments of Sabs, he’s not alone in this), which usually makes people like me the “mere ripples”.
It would just feel weird: a peasant helping the master to deal with some issue together?
The world is not ready for it.
I’d love to be proved wrong though.
In my experience it goes like this: I say something, polite or not polite, anything related to this set of issues, and I get downvoted or asked to “rephrase it in some way”.
What I really want is answers.
Like, the RX/TX balance of this conversation is: I sent a lot of stuff to EAs and got not much meaningful response.
So I stop.
I got somewhat disappointing answers on LW—people telling me how “this will not have impact” instead of answering the questions :) Seriously, I think it’s a 15-minute problem for someone who knows theoretical CS well. It could have some impact on a very hard problem. Not the best option, probably, but what is better?
Isn’t it easier to spend 15 minutes working on a CS theory problem, meeting new people, learning something, instead of coming up with a long explanation of “why this is not the best choice”?
I’m a feminist, but I’ll give a trad cis example to illustrate this, because I don’t expect a feminist one to go well here (am I wrong?). In How I Met Your Mother, the womanizer character Barney Stinson once had an issue. Women were calling him every minute and wanting to meet him. He couldn’t choose which one was the “best” choice. As a result, he didn’t get to know any of them.
https://m.youtube.com/watch?v=_twv2L_Cogo
I feel it’s the same—so much energy spent on “whether it’s the best thing to do” that even 15 minutes will not be spent on something new. An illusion of exploration: not actually trying the new thing, but quickly explaining why it’s “not the best”, spending most of the time “computing the best thing” and never actually doing it...
Am I not seeing it right? Am I missing something?
Well, I ate and decided to replace it with “let’s discuss it” altogether :)
Correction: “now” replaced by “the sooner the better” :)
Those who downvote:
Here’s my number +41 78 732-01-34 Here’s my email: sergia94@protonmail.com
Here’s my address:
Langstrasse 213 Room 43c 8005 Zurich Switzerland
Tell us why, now. Or be quiet from this point on, forever.
Question: would an impactful but not cool/popular/elegant topic interest you? What’s your balance between coolness and impactfulness?
I’ve been watching this discourse since 2018, including when I was in EA and doing AI safety.
At no point did I see a discussion of whether a big EA-adjacent org is net-positive or net-negative.
It’s some sort of “blind spot”: we evaluate other people’s charities, but ours are, of course, pretty good.
I feel it’s time to have a discussion about this, that would be awesome.
I acknowledge and agree with your criticism.
I have questioned these assumptions (“we do capabilities to increase career capital, and somehow stay in this phase almost forever” and such) since 2020, in the field, talking to people directly. The reactions and disregard I got are the reason I feel the way I do about all this.
I was thinking “yes, I am probably just not getting it, I will ask politely”. The replies I got are what causally preceded my feeling this way.
I am traumatized and I don’t want to engage fully logically here, because I feel pain when I do that. I was writing a lot of logical texts and saying logical things, only to be dismissed, kinda, like “you’re not getting it, we are getting to the top of this, maybe you need to be more comfortable with power” or something like that.
Needless to say, I have pre-existing trauma about a similar theme from childhood, family etc.
I do not pretend to be an objective EA doing objective things. After all, we don’t have much objective evidence here except for news articles about Anthropic 🤷♀️
So, what I’m doing here is simply expressing how I feel, expressing that I feel a bit powerless about this problem, and asking for help in solving it, inquiring about it, and making sure something is done.
I can delete my post if there is a better post and the community thinks my post is not helpful.
I want to start a discussion, but all I have is a traumatized mind tired of talking about it, which tried every possible measure I could think of.
I leave it up to you, the community, people here to decide—post a new post, ignore it, keep this one and the new one, or only the new one, or write Anthropic people directly, or go to the news, or ask them on Twitter, or anything you can think of—I do not have the mental capacity to do it.
All I can do is write that I feel bad about it, that I’m tired, that I don’t feel my CS skills would be used for good if I joined AIS research today, that I’m disillusioned, and ask the community, people who feel the same, to do something if they want to.
I do not claim factual accuracy or rationality metrics. Just raw experience, to serve as a starting point for your own actions about this, if you are interested.
My mind right now can do talks about feelings, so I talk about feelings. I think feelings are a good way to express what I want to say. So I went with this.
That is all. That is all I do here. Thank you.
This analysis seems to be considering only the future value, ignoring current value. How does it address current issues, like ones here?
Why is a small secretive group of people who plan to do some sort of “world AI revolution” that brings “UBI” (without much of a plan for how, exactly) considered “good” by default?
I’m one of those who was into this secretive group of people before, only to see how much there is on the outside.
Not everyone thinks that what currently exists is “good by default”.
Goodness comes from participation, listening, talking to each other. Not necessarily from some moral theory.
I call for discussing this plan with the larger public. I think it would go well, and I have evidence for this if you’re interested.
Thank you.
I’m asking seriously, because I feel what you say speaks to a lot of people in Silicon Valley, so I ask this question to you and them in some way as well.
Concrete question (I don’t have much of that today)
Have you been to Europe?
To be more object-level,
YES I am confused in terms of “releasing models” and “public participation”. Very very much.
I don’t think it’s just me though.
The Google ethics team is confused too: Margaret Mitchell went to Hugging Face and Timnit Gebru went to do public participation.
All of this is tricky: there’s a culture war in many countries, and somehow, in those conditions, we need to have a discussion about AI. We can’t not do it: secrecy will only make it worse, because of the lack of feedback, the backlash, and the lack of oversight.
Releasing models makes them easier to inspect, but also opens doors to bad actors.
It’s a mess.
It’s more like the whole industry is confused.
What seems reasonable is to slow all this down a bit. It’s likely that a lot of ML people are burned out from working so fast and not thinking clearly.
We saw Yudkowsky talking on Twitter and trying to save everyone—that doesn’t seem like things are going particularly well.
As you have seen, I am definitely for slowing things down—all in for that.
How can we do that, so later we can discuss all this mess, at least be in a sane state for that?
I feel that when you hear “regulation” you assume it’s going to be Putin-style regulation.
Putin is not the only way. Not the 146%.
The EU is not the only kind of regulation that exists. Not the 30% (I don’t know; it’s a made-up number not reflecting anything in particular).
JUST. 10. PERCENT.
Just inform the patients of the mental health startup. Just add a bit of public oversight into AI. Just at least break up Insta and FB so they compete like they should. Just rehire the Google ethics team and let them inform the public about bias and what to do about it, and fix the biggest issues. Possibly done in a few months or so?
Just a teeny tiny bit would go so far.
Don’t discuss it with me! Discuss it with the community! :) I’m not an EA!!!
To be less cryptic, it’s not really about me. It’s about the community finally discussing these real pressing problems instead of talking only about shrimp and infinite ethics (nothing wrong with that, but not when there’s a big pressing issue with something being off in AIS).
I’m just one person. I hold the positions that “completely no regulation” is not the way, that “too much regulation” is not the way, that “talking to the public” is the way, that “the culture war can be healed”, that “billionaire-only funding is not the way”, that “listening and learning is the way”, that “Anthropic seems off”, that “AIS culture seems off”, that “EAs are way too ignorant of everything that’s current or outside EA”, that “red pill thinking is widespread in tech and EA and this is not ok”, and that “we should discuss it broadly” in general.
My experience led me to these beliefs and I have things to show for each of those.
I don’t really know the best way to align AI. What is definitely a first step is to at least have some consensus, or at least a concrete map of disagreements, on these issues.
So far, the approach of the community is “big people in famous EA entities do it, and we discuss mostly non-pressing issues about infinities while they, over there, make controversial, potentially civilization-altering decisions (if one believes ™️), unaccountable and vague, on top of an ivory tower”.
My post is a way to deal with it and I see it as a success.
I am not your leader. I will not do things you said I should do. I will not “lead” this discussion—it is impossible.
What I can do is inspire people to do it better than me.
Your move.
Oh, and Sabs, why do you consider your own utopia an insult and a danger, something that I might get blocked for pointing out?
Well then, to the mods: I don’t like utilitarianism, I was hurt by it, and I feel it’s well within my rights to show why utilitarianism might not be ok, with a personal example for Sabs.
And if you ban me: I don’t want to be part of a community that says “it’s normal to ignore the suffering of many people, if they’re not everyone, just select groups”.
This would make it an official statement from EA. We all feel it’s like this, but legit evidence is even better.
It is rambling and incoherent. See why here: https://forum.effectivealtruism.org/posts/bmfR73qjHQnACQaFC/call-to-demand-answers-from-anthropic-about-joining-the-ai?commentId=EaBHtEpJCEv4HnQky
It’s a part of what I’m talking about here
My emotional state is relevant here. I’m one of the people who was excited about safety. Then I slowly saw how the plan is shaky and the decisions controversial (advertising OpenAI jobs, doing the “first we get a lot of capabilities skills, then do safety” thing, which usually means a capabilities person with an EA t-shirt and not much safety).
My emotional state summarises the history that happened to me. It is relevant to my case: I am showing you how you would feel if you went through my experience, if you choose to believe it.
It’s not a “side note”, it’s my evidence I’m showing to say “I have concerns and this feels off, rather a pattern than a one-off case”. Emotions are good for holistic reasoning.
I don’t have the energy to write a “full-fledged EA post with the dots over the i’s and all that”. I mean, I feel I’m one of the “plaintiffs” in this case. I believed EA, I trusted all those forum posts. Now I see something is wrong. I am simply asking other people to look into this and say how they feel and think about it.
So we figure out something together.
I feel I need support. No, I don’t want to go to the FB mental health EA support group, because this is not about mental health specifically—it’s about how the field of AI safety is. It’s not that “I feel bad because of a chemical imbalance in my mind”; I feel bad because I see bad things :)
I have written at length on Twitter about my experiences. If you’re still interested, I can link it a bit later.
For a traumatized person, it’s painful to go through all this again and again.
Thank you.