https://sergia-ch.github.io/
About the last posts:
I had a health crisis
I still agree with some of the points I made
In short, I’d make it more constructive but I’d still raise the question about Anthropic and the AI race
Correction: “now” replaced by “the sooner the better” :)
Those who downvote:
Here’s my number +41 78 732-01-34 Here’s my email: sergia94@protonmail.com
Here’s my address:
Langstrasse 213 Room 43c 8005 Zurich Switzerland
Tell us why, now. Or be quiet from this point on, forever.
I’m mostly leaving. I have said basically the same thing over and over again: in a polite way, in a more Twitter-like way as here, and the Concerned EAs said it in an EA Forum-style post. At this point, I see that I have done everything that I could.
In general, I feel this conversation between leftists and EAs fails because of holistic vs logical reasoning.
EAs tend to believe that “logical reasoning” is superior to “holistic reasoning”
Holistic reasoning is good at spotting patterns. Logic is good at disproving that the patterns exist, by cutting the thing into a taxonomy with the sharp razor of logical reasoning.
What I’m presenting is a pattern. Of course it’s not a completely correct statement in a logical sense: I don’t really know how it is. All I say is: I’m concerned that I see it.
What I get in response is not “sure, let’s brainstorm, steelman it”. Or better: “sure, let’s see if it’s a valid pattern by taking action and seeing if the response is consistent with the pattern” (active exploration, needed to uncover the causal structure; I did a thesis on this). In some cases it’s not possible to obtain the correct model from observation alone, nor would an internal monologue about the observation help the agent. Only action and testing the hypothesis will.
I feel it’s like a paper review, not brainstorming. “Everything wrong with this in 1000 words” rather than “here’s what I agree with, here’s what I disagree with, here’s my evidence”.
That’s why it doesn’t feel friendly to me to talk to EAs. I tried it so, so many times; every time, my pattern, which is connected to my personality, gets dissected, traumatizing and invalidating me. Questioning my very axioms, which I discovered with sweat and pain, rather than saying whether they agree with the statement itself.
Questioning my holistic thinking that lives with the logical one quite well together for me.
To me some posts read as a call for me to abandon my emotional side and use only logic (such as the libertarian utilitarianism by Sabs, or “red pill is OK in EA” in the most downvoted subthread here).
I want balance, I want both. To me “only logic, only one way of thinking” is very extreme. You saw my other posts—I can do mathy things too. Just, I believe that if I only see and do mathy things, my world model would be incomplete, my exploration would be doomed, and my prior will not be universal. It would be “intelligence used to shoot itself in the foot”
There is no way to explain it. It just happened to me once that I met a person who showed this to me, reminded me of something I forgot. The dream I had of “math being superior to humanities and such” collapsed, like in the Inception movie.
Where I am now feels better. I can feel again. I feel whole. And I can still do math. I am sometimes uncomfortable with using math without emotion. Like talking about genocide and saying “it’s not significant because the number of dead people is low”.
It’s not the way.
See my post about the matrix trilogy: https://dair-community.social/@sergia/109979018708128244 and about how sometimes the seeming necessity of a choice is the problem, an artifact of the decision-making system rather than an objective necessity.
I see emotion as something helping to see a bit beyond the tunnel vision of logic.
It all changed for me when I saw the intensity of suffering of just one person who is not like me. I couldn’t think of it as pure numbers or “mere ripples”. Feeling what they feel.
I don’t want this—never again anyone has to feel this bad.
I reject utilitarianism: I am from an ex-communist country which loved utilitarianism (and still does: Putin says “I’m going to save the world from the evil West, but to do so we’re going to need to kill a lot of people”, and people believe in this there!)
I believe this can be done, caring about everyone; at least, I will do my best. That doesn’t require any violence, I believe, not at all. Those who did aggressive violence in the name of peace were utilitarians! Lenin and those people. F them :) they totally didn’t get anything, I think.
See more on why I believe so here: https://dair-community.social/@sergia/109977128036067592
(Side note: it is surprising, but I have felt this so many times that it’s not surprising at all now: a complete and total lack of interest from EAs in asking questions about my story, the actual evidence I have. Like, nobody cares what happened in real life with real people; only if there’s some mathy philosophy is anything worth anything 💔💔💔)
That can’t be explained, no matter how many walls of text I write: that one life is worth more than any philosophy, any belief system, any ideology, anything like that.
I will say this out loud: I am extremely concerned that a secretive group of people who see superiority as acceptable (men over women in “red pill”, majority vs minority in “utilitarianism”, big over small in “total libertarianism”) are engineering an “AI revolution” with manifestos and promises of “UBI”, with no plan how to get there, only promises and handwaving, and ignore current crimes and harms done by them in the face of some “greater good” they imagine in their utopian manifestos.
If we replace “red pill” with “male supremacy”, “libertarianism” with “communism”, “AI” with “collectivism”, and “UBI”, well, with “UBI”, we have a clear parallel between now and 100 years ago: first in Moscow, now in Silicon Valley. Both were not nice to women (and had a “theory of all theories” to justify that), and not nice to minorities (both saw them as a nuisance, never believed that “people are different”, saw everyone as the same because they themselves never met anyone unlike them, of course with a “theoretical justification”). Both imagined some “grand future” for which it was acceptable (and kind of encouraged) to commit crimes in the present.
The story 100 years ago went like this: the Red Communists, the most utilitarian ones, killed everyone who had concerns about the validity of their philosophy.
If it comes to this, you’ll see that I won’t defend myself. I don’t want to live in a world dominated by utilitarians. All full of utopias in their heads, with actual real poor people on the streets. All imagined, all fake. I’ll do my peaceful thing. That is it.
Just one life is more important than all those mental sandcastles.
That is it—that is goodbye.
There were some nice things, some nice people I met in EA. I still want to talk to them
In general, this is a goodbye.
Goodbye!
I would totally love to talk “real”. A conversation that doesn’t feel like I’m talking to a sociopath who would rather believe in some philosophy than even try to help real people.
The real thing is:
it takes 10 minutes to write a question to Anthropic
there’s potential upside
there’s no potential downside in asking a question
Instead, people here ask “well, shall I even spend one second making sense of it, or is it all total BS?”
It’s not a school assignment. It’s like you say “oh teacher, the assignment seems contradictory” and the teacher is like “oh sure, that’s a typo”. I’m not here to fix typos. I did it and it didn’t lead to answers.
It’s real world. Nobody knows the full truth, there’s no one true theory, there’s no reward for an assignment and there’s no assignment.
Since some of you are in a country where the culture war is heating up, I’d strongly recommend learning to work with partially correct information: inferring the emotion instead of throwing a “parse error on line 1”.
Then we can heal it. There’s not one single person who’s perfect.
What does seem odd is completely downvoting a person for asking questions to the core of a philosophy.
What does seem odd is seeing a post advocating for “Even More Centralisation” after all that has transpired and all that has been said. It’s heartbreaking to me because I know where it leads. I have that experience. For you it’s a “map”, I have memory of real territory. For you it’s “form”, for me it has meaning.
Please reach out to someone who knows the territory. Someone outside of this group that believes they have answers to everything...
Save yourself. Really.
Do you believe in it?
Just seems weird if someone said “to be safe from a deadly disease, what we really need is to develop it as soon as we can”
I get that the metaphor has holes; it just seems a bit “out there”.
I’d say that “to have safe agi, we need to do agi engineering the fastest way possible” is a very extraordinary claim.
It requires very extraordinary evidence to support it.
My thing which is “can we ask them to explain it” seems like a very ordinary claim to me.
So it doesn’t require much evidence at all.
Yes, it’s about me; I’m a trans girl from Russia. Yes, I’m saying that it would feel weird to me to do something with the EA community.
People here believe it’s ok to believe in “red pill” (not the one from the movie, the other one, see in the most downvoted subthread here). I don’t want this in my life. It doesn’t feel ok to me to believe in that.
People here believe in utilitarianism (see comments of Sabs, he’s not alone in this), which usually makes people like me the “mere ripples”.
It would just feel weird: a peasant helping the master to deal with some issue together?
The world is not ready for it.
I’d love to be proved wrong though.
My experience is that it goes like this: I say something, polite, not polite, anything, related to this set of issues, and I get downvoted or asked to “rephrase it in some way”.
What I really want is answers.
Like, the RX/TX balance of this conversation is: I sent a lot of stuff to EAs and got not much meaningful response.
So I stop.
I see downvotes after my other post. Is this a “halo effect”? :)
Can there be objective feedback?
Or, how is this post linked to something else I said people mostly don’t like here apparently?
Thank you.
I feel I have failed right here. I want EA people to somehow talk to each other and finally decide something together. Not to me.
I don’t really know. I’m not the one to ask :)
What is “EA-adjacent”? Well, we can come up with some phrase for a definition, then see how some corner cases don’t fit it, extend the definition, and repeat a few times.
It would work for some phases of EA (like when there were only bed nets), but not for the future; it will need to be updated.
This seems to be mostly what people do here—dividing the world into concrete blocks with some structure on top.
That doesn’t answer any of the concerns, it’s so far away—creating some taxonomy of what’s EA and what’s not in EA...
What was the issue? That some people at Anthropic stopped informing us about what’s going on. That the industry is kind of confused about what to do, burned out, and, some (me included) say, radicalised into “male warriors going bravely and gloriously into Valhalla at full speed”. That there are so many issues with AI today (how to talk to the public? how to get help with this? how to stop current harm? what about regulation? etc.) that people tend to just ignore it all and focus on the shrimp and infinite ethics. I feel this lethargy and apathy too. Let’s not go there; this has only one possible ending.
Let’s evaluate THAT.
It doesn’t matter how we define it.
Does the culture of OpenAI and EA intersect? Yes. A lot. Are they causally linked? Yes. A lot. Is Anthropic causally linked to all this as well? Yes. A lot.
Is something wrong over there? Yes. Definitely looks like it to me.
That’s all that matters. Since we’re (apparently) people who are supposed to do something about it. Let’s do it. Let’s finally do a debate about whether “ignoring issues today is acceptable”. Let’s discuss “what do we want Anthropic and maybe OpenAI to do”, let’s discuss “how can we get outside people to help”. Let’s finally discuss “whether red-pilled stuff is ok”
All of this that was ignored for decades apparently.
Can we please not put it under the rug?
About the discussion: ethicists are going on TV programs and it’s going pretty well. No “normies won’t understand”, none of it. It’s working quite OK so far.
No need for “write your post in a format that I can parse with my RationalityParser9000. Syntax error on line 1, undefined entity ‘emotion’. Error. Loading shrimp welfare...” 💔
C’mon. Nothing to be afraid of. You really don’t need a tranny from Russia to lead you into a discussion about the next shit that’s about to blow up in Silicon Valley. I’m pretty sure you can do it :)
Don’t ask me, I’m an immigrant here. The “minor inconvenience”, “a mere remainder, mere ripples” in someone’s utopia, an artifact in a render, a glitch, a fluke, a “disappointment to EA leaders seeing me”. I don’t know.
Ask other EAs:)
I’m asking seriously, because I feel what you say speaks to a lot of people in Silicon Valley, so in some way I ask this question to you and to them as well.
Concrete question (I don’t have much of that today)
Have you been to Europe?
I’ve been watching this discourse since 2018, including when I was in EA and doing AI safety.
At no point did I see a discussion of whether a big EA-adjacent org is net-positive or net-negative.
It’s a sort of “blind spot”: we evaluate other people’s charities, but ours are, of course, pretty good.
I feel it’s time to have a discussion about this, that would be awesome.
To be more object-level,
YES I am confused in terms of “releasing models” and “public participation”. Very very much.
I don’t think it’s just me though.
The Google ethics team is confused too: Margaret Mitchell went to Hugging Face and Timnit Gebru went to do public participation.
All of this is tricky: like, there’s a culture war in many countries, and somehow, in those conditions, we need to have a discussion about AI. We can’t not do it: secrecy will only make it worse, because of the lack of feedback, the backlash, and the lack of oversight.
Releasing models makes them easier to inspect but also opens doors to bad actors.
It’s a mess.
It’s more like the whole industry is confused.
What seems reasonable is to slow all this down a bit. It’s likely that a lot of ML people are burned out working so fast and not thinking clearly.
We saw Yudkowsky on Twitter trying to save everyone; that doesn’t look like things are going particularly well.
As you have seen, I am definitely for slowing things down—all in for that.
How can we do that, so later we can discuss all this mess, at least be in a sane state for that?
I feel that when you hear “regulation”, you assume it’s going to be Putin-style regulation.
Putin is not the only way. Not the 146%
The EU is not the only kind of regulation that exists. Not the 30% (I don’t know; it’s a number I just made up, not reflecting anything in particular).
JUST. 10. PERCENT.
Just inform the patients of the mental health startup. Just add a bit of public oversight to AI. Just at least break up Insta and FB so they compete like they should. Just rehire the Google ethics team and let them inform the public about biases and what to do about them, and fix the biggest issues. Possibly done in a few months or so?
Just a teeny-tiny bit will go so far.
At the same time, you say “boosting growth” and you’re also for “breaking eggs to make an omelet” (“go big or go home”, “move fast and break things”, those).
So it’s like a train that is very fast and innovative. The people on the train are getting to their destination fast
The only issue is that the train is rolling over people chained to the tracks :)
And you are the train driver and you say “progress!”
Well, in another life you are the one chained to the tracks :)
Can we just move like 10% slower
Again. In bold
JUST 10% SLOWER
CAN YOU HEAR ME OH YOU LIBERTARIAN
JUST 10%
Just a bit of regulation. Just enough to unchain the people.
And then I’m good with all you say.
Don’t discuss it with me! Discuss it with the community! :) I’m not an EA!!!
To be less cryptic: it’s not really about me. It’s about the community finally discussing these real pressing problems instead of talking only about shrimp and infinite ethics (nothing wrong with that, but not when there’s a big pressing issue with something being off in AIS).
I’m just one person. I hold the positions that “completely no regulation” is not the way, that “too much regulation” is not the way, “talking to public” is the way, “culture war can be healed”, “billionaire funding only is not the way”, “listening and learning is the way”, “Anthropic seems off”, “AIS culture seems off”, “EAs are way too ignorant of everything that’s current or outside EA”, “red pill is widespread in tech and EA and this is not ok”, “let’s discuss it broadly” in general
My experience led me to these beliefs and I have things to show for each of those.
I don’t really know what’s the best way of aligning AI. What is definitely a first step is to at least have some consensus, or at least a concrete map of disagreements on these issues.
So far, the approach of the community is: “big people in famous EA entities do it, and we discuss mostly non-pressing issues about infinities while they, over there, make controversial, potentially civilization-altering decisions (if one believes ™️), unaccountable and vague, on top of an ivory tower”.
My post is a way to deal with it and I see it as a success.
I am not your leader. I will not do things you said I should do. I will not “lead” this discussion—it is impossible.
What I can do is inspire people to do it better than me.
Your move.
This comment is the reason why I started this and the result of my post. I see it as a success.
So, can we have a larger discussion about this?
I am only one person. I did this post.
To do a bigger discussion, there needs to be more people.
I see you care about this.
2+2=...
Well, I feel the “red pill” part is directly relevant to alignment: both for current and long-term issues, for the values that go into the AI, and for the power structure of the AI company that builds it.
I guess that’s why I included it in my post. I don’t really know; I did it mostly on emotion, and emotion is not always well-interpretable (sometimes for the best).
I do feel we (EA, tech, finance, and related) need to discuss the “red pill” stuff as a community, and whether it’s extreme. My experience (n=1) and my interpretation of roughly m=100 other people’s say that yes: it’s a poorly and vaguely phrased partial theory that mostly explains how traumatic, unhappy, unhealthy relationships work (traumatized people are the ones most responsive to the “push-pull” of pickup artistry, not because “this is how people are” but because “this is how traumatized people try to be happy and fail”). It gives a phenomenological description with a completely wrong and actively harmful explanation of the underlying causes, with links to fascism, dehumanisation, aggressiveness, and fatalism.
Personally, I feel in a lot of cases this ideology is the reason people are unsuccessful in relationships: it is a fake cure for a problem that was probably “just” trauma and misunderstanding in the first place. Like, a society-wide misunderstanding between genders. Again, my personal view.
See my other comments about how “a society which is not aligned within itself is unlikely to be able to align other entities well”. Something as massive as this, I believe, should be addressed before anything external can be taken care of.
Same reason I feel the discussion “apple&android vs Nokia&fxtec” in another thread is very very very directly relevant to alignment, again, both power structure-wise and values themselves-wise.
I don’t really know how best to have such a discussion; again, I’m only one person, I don’t really know :)
I am tired. I want a vacation from all this.
I have hope in the community that they are smart and capable and can sort these things through.
This analysis seems to consider only future value, ignoring current value. How does it address current issues, like the ones here?
Why is a small secretive group of people who plan to do some sort of “world AI revolution” that brings “UBI” (without much of a plan for how, exactly) considered “good” by default?
I’m one of those who was inside this secretive group of people before, only to see how much there is on the outside.
Not everyone thinks that what currently is is “good by default”.
Goodness comes from participation, listening, talking to each other. Not necessarily from some moral theory.
I call for discussing this plan with the larger public. I think it will go well, and I have evidence for this if you’re interested.
Thank you.
Well, I ate and decided to replace it with “let’s discuss it” altogether :)