So I see downvotes, as expected. I don’t get it:
Is it that people don’t want answers?
Or maybe they like AI races?
Can people get real on this forum? Like, there are discussions about some ethical theory, infinite ethics or something. Yet, right now, today, something fishy is going on. How can there be a future without the present?
I ask for answers here.
(Note: did not downvote)
Your central question appears interesting and important to me: Has Anthropic joined the arms race for advanced AI? If so, why?
(And taking a conflict-theoretic stance by default toward new AI startups is perhaps good, based on the evidence one has received via DeepMind/OpenAI.)
So, I’d join in the call for asking e.g. Anthropic (but also other startups like Conjecture, Adept AI and Aligned AI) for their plans to avoid race dynamics, and their current implementation. However, I believe it’s not very likely that Anthropic in particular will comment on this.
However, your post mostly doesn’t flesh out the question. Instead it not-quite-attacks-but-also-not-not-attacks Anthropic (“Even when I’m mostly talking to AI ethicists now, I still regarded Anthropic as something not evil”), doesn’t fully flesh out the reasons why you’re asking the question (“I feel there’s a no-confidence case for us trusting Anthropic to do what they are doing well”), and talks a lot about your emotional state. (I don’t think that talking about your emotional state is necessarily bad, but I’d like accusations, questions and statements about emotion to be separated if possible.)
“See my thread for more questions. I feel traumatized by EA, by this duplicity (that I have seen “rising up” before this, see my other threads). I’m searching for a job and I’m scared of people. Because this is not the first time, not at all. Somehow tech people are “number one” at this. And EA/tech people seem to be “number 0”, even better at Machiavellianism and duplicity than Peter Thiel or Musk. At least, Musk openly says he’s “red-pilled” and talks to Putin. What EA/safety is doing is kinda similar but hidden under the veil of “safety”.”

I don’t understand this paragraph, for example. Why do you believe that EA/tech people are better at Machiavellianism than those two? And who exactly are “EA/tech people” here? That would be good to know.
My emotional state is relevant here. I’m one of the people who were excited about safety. Then I slowly saw how the plan is shaky and the decisions are controversial (advertising OpenAI jobs, the “first we get a lot of capabilities skills, then do safety” approach, which usually means a capabilities person with an EA t-shirt and not much safety).
My emotional state summarises what happened to me. It is relevant to my case: I am showing you how you would feel if you went through my experience, if you choose to believe it.
It’s not a “side note”; it’s the evidence I’m showing to say “I have concerns and this feels off, more a pattern than a one-off case”. Emotions are good for holistic reasoning.
I don’t have the energy to write a “full-fledged EA post with the dots over the i’s and all that”. I mean, I feel I’m one of the “plaintiffs” in this case. I believed EA, I trusted all those forum posts. Now I see something is wrong. I am simply asking other people to look into this, and to say how they feel and what they think about it.
So we can figure something out together.
I feel I need support. No, I don’t want to go to the FB mental health EA support group, because this is not about mental health specifically—it’s about how the field of AI safety is. It’s not that “I feel bad because of a chemical imbalance in my mind”. I feel bad because I see bad things :)
I have written at length on Twitter about my experiences. If you’re still interested, I can link it a bit later.
For a traumatized person it’s painful to go through all this again and again.
Thank you.
I’ll explain my downvote.
I think the thing you’re expressing is fine, and reasonable to be worried about. I think Anthropic should be clear about their strategy. The Google investment does give me pause, and my biggest worry about Anthropic (as with many people, I think) has always been that their strategy could ultimately lead to accelerating capabilities more than alignment.
I just don’t think this post expressed that thing particularly well, or in a way I’d expect or want Anthropic to feel compelled to respond to. My preferred version of this would engage with reasons in favor of Anthropic’s actions, and how recent actions have concretely differed from what they’ve stated in the past.
My understanding of (part of) their strategy has always been that they want to work with the largest models, and sometimes release products with the possibility of profiting off of them (hence the PBC structure rather than a nonprofit). These ideas also sound reasonable (but not bulletproof) to me, so I consequently didn’t see the Google deal as a sudden change of direction or backstab—it’s easily explainable (although possibly concerning) in my preexisting model of what Anthropic’s doing.
So my objection is jumping to a “demand answers” framing, FTX comparisons, and accusations of Machiavellian scheming, rather than an “I’d really like Anthropic to comment on why they think this is good, and I’m worried they’re not adequately considering the downsides” framing. The former, to me, requires significantly more evidence of wrongdoing than I’m aware of or you’ve provided.
I acknowledge and agree with your criticism.
I have questioned these assumptions (“we do capabilities to increase career capital, and somehow stay in this phase almost forever” and such) since 2020, talking to people in the field directly. The reactions and the disregard I got are the reason I feel the way I do about all this.
I was thinking “yes, I am probably just not getting it, I will ask politely”. The replies I got are what causally preceded my feeling this way.
I am traumatized and I don’t want to engage fully logically here, because I feel pain when I do that. I wrote a lot of logical texts and said logical things, only to be more or less dismissed, with replies like “you’re not getting it, we are going to the top of this, maybe you need to be more comfortable with power” or something like that.
Needless to say, I have pre-existing trauma about a similar theme from childhood, family etc.
I do not pretend to be an objective EA doing objective things. After all, we don’t have much objective evidence here except for news articles about Anthropic 🤷♀️
So, what I’m doing here is simply expressing how I feel, expressing that I feel a bit powerless about this problem, and asking for help in solving it, inquiring about it, and making sure something is done.
I can delete my post if there is a better post and the community thinks my post is not helpful.
I want to start a discussion, but all I have is a traumatized mind tired of talking about it, which tried every possible measure I could think of.
I leave it up to you, the community, people here to decide—post a new post, ignore it, keep this one and the new one, or only the new one, or write Anthropic people directly, or go to the news, or ask them on Twitter, or anything you can think of—I do not have the mental capacity to do it.
All I can do is write that I feel bad about it, that I’m tired, that I don’t feel my CS skills would be used for good if I joined AIS research today, that I’m disillusioned, and that I ask the community, the people who feel the same, to do something if they want to.
I do not claim factual accuracy or rationality metrics. Just raw experience, to serve as a starting point for your own actions on this, if you are interested.
My mind right now can talk about feelings, so I talk about feelings. I think feelings are a good way to express what I want to say. So I went with that.
That is all. That is all I do here. Thank you.
I downvoted this post because it felt rambling and not very coherent (no offence). You can fix it though :-).
I would also be in favour of having more information on their plan.
The EA Corner Discord might be a better location for posts like this that are very raw and unfiltered. I often post things to a more casual location first, then post an improved version either here or on LessWrong. For example, I often use Facebook or Twitter for this purpose.
It is rambling and incoherent. See why here: https://forum.effectivealtruism.org/posts/bmfR73qjHQnACQaFC/call-to-demand-answers-from-anthropic-about-joining-the-ai?commentId=EaBHtEpJCEv4HnQky
It’s part of what I’m talking about here.
There will be no more editing. I have done quite a lot in this direction (not on the EA forum). I have experience in political movements—when one does so much but the community is still not “getting it”, the solution is for the community to figure things out for itself. Maybe after all I am wrong?
This isn’t a school assignment. Your grade on my post is meaningless.
What does make sense is how you feel about the problem itself and what you will do.
I mean, I don’t even understand how you feel. It’s just vague amounts of upsetness and trauma and a wish for Anthropic to respond? I think people just don’t share your feelings and find them incongruent with how they view the empirical facts. Even in this thread you can’t decide between wanting models to be released and wanting “public participation”. Then you also say these models cause current-day harms (Claude isn’t released yet?), while citing people whose ethics are just open-sourcing and releasing models (e.g. Hugging Face’s DALL-E Mini didn’t even have a pornography blocker for the first few days).
I think you say you want a discussion about Anthropic (this has been done quite a lot on the forum), but then you give no way to have one. And anytime the discussion disagrees with you, you retreat to justifying the post by saying it’s “trauma” and “your grade on my post doesn’t matter”.
This comment is why I started this, and it is the result of my post. I see it as a success.
So, can we have a larger discussion about this?
I am only one person. I made this post.
For a bigger discussion, there need to be more people.
I see you care about this.
2+2=...
I would not like to discuss things with you, given your previous actions; I don’t think that would be fruitful for anyone involved.
Don’t discuss it with me! Discuss it with the community! :) I’m not an EA!!!
To be more object-level:
YES, I am confused about “releasing models” versus “public participation”. Very, very much.
I don’t think it’s just me though.
The Google ethics team is confused too: Margaret Mitchell went to Hugging Face and Timnit Gebru went to work on public participation.
All of this is tricky. Like, there’s a culture war in many countries, and somehow in those conditions we need to have a discussion about AI. We can’t not do it: secrets will only make it worse, because of lack of feedback, backlash, and lack of oversight.
Releasing models makes them easier to inspect but also opens the door to bad actors.
It’s a mess.
It’s more like the whole industry is confused.
What seems reasonable is to slow all this down a bit. It’s likely that a lot of ML people are burned out working so fast and not thinking clearly.
We saw Yudkowsky talking on Twitter and trying to save everyone—that doesn’t seem like things are going particularly well.
As you have seen, I am definitely for slowing things down—all in for that.
How can we do that, so that later we can discuss all this mess, or at least be in a sane state for that?
To be less cryptic, it’s not really about me. It’s about the community finally discussing these real, pressing problems instead of talking only about shrimp and infinite ethics (nothing wrong with that, but not when there’s a big pressing issue with something being off in AIS).
I’m just one person. I hold the positions that “completely no regulation” is not the way, that “too much regulation” is not the way, that “talking to the public” is the way, that “the culture war can be healed”, that “billionaire funding only is not the way”, that “listening and learning is the way”, that “Anthropic seems off”, that “AIS culture seems off”, that “EAs are way too ignorant of everything that’s current or outside EA”, that “red pill is widespread in tech and EA and this is not ok”, and, in general, “let’s discuss it broadly”.
My experience led me to these beliefs and I have things to show for each of those.
I don’t really know what the best way of aligning AI is. What is definitely a first step is to at least have some consensus, or at least a concrete map of disagreements, on these issues.
So far, the approach of the community is “big people in famous EA entities do it, and we mostly discuss non-pressing issues about infinities while they, over there, make controversial, potentially civilization-altering decisions (if one believes ™️), unaccountable and vague on top of an ivory tower”.
My post is a way to deal with it and I see it as a success.
I am not your leader. I will not do things you said I should do. I will not “lead” this discussion—it is impossible.
What I can do is inspire people to do it better than me.
Your move.
Or maybe people here love Peter Thiel, Musk and red pills? In that case, I guess, there’s not much to discuss. At least I expected some answers. It’s as if people don’t even bother to explain what is being done; it’s just assumed to be “correct”?
In addition, we are issuing a warning to sergia, for this and other comments. Sergia, please read the EA Forum norms post and, if you’re in doubt about whether your comment meets those norms, please wait for a while and revise your comment.
This subthread seems to be going in a bad direction. I would encourage those wanting to discuss the net-value of Elon Musk and Peter Thiel on the world to do so elsewhere.
Well, I feel the “red pill” part is directly relevant to alignment, both for current and long-term issues: the values that go into the AI, and the power structure of the AI company that builds it.
I guess that’s why I included it in my post. I don’t really know; I did it mostly with emotion, and emotion is not always well-interpretable (sometimes for the best).
I do feel we (EA, tech, finance and related) need to discuss this as a community: the “red pill” stuff and whether it’s extreme. My experience (n=1) and my interpretation of roughly a hundred other people (m≈100) say that yes, it’s a poorly and vaguely phrased partial theory that mostly explains how traumatic, unhappy, unhealthy relationships work (traumatized people are the ones who will be most responsive to “push-pull” pickup artistry, not because “this is how people are” but because “this is how traumatized people try to be happy and fail”), giving a phenomenological description with a completely wrong and actively harmful explanation of the underlying causes, with links to fascism and dehumanisation, aggressiveness and fatalism.
Personally, I feel in a lot of cases this ideology is the reason people are unsuccessful in relationships: it is a fake cure for a problem that was probably “just” trauma and misunderstanding in the first place. Like, a society-wide misunderstanding between genders. Again, my personal view.
See my other comments about how “a society which is not aligned within itself is unlikely to be able to align other entities well”. Something as massive as this, I believe, should be addressed before anything external can be taken care of.
For the same reason, I feel the “apple&android vs Nokia&fxtec” discussion in another thread is very directly relevant to alignment, again both power-structure-wise and values-wise.
I don’t really know how best to have such a discussion; again, I’m only one person, I don’t really know :)
I am tired. I want a vacation from all this.
I have hope in the community that they are smart and capable and can sort these things through.
I understand that downvotes can be hurtful – but afaik the post has been up for 45min, so maybe it would be a good idea to wait a bit before reading too much into the reaction/non-reaction?
Personally I love Thiel & Musk and think they’ve been massive net positives for the world!
Strongly agree on Musk (undecided on Thiel), and it frustrates me so much that people on this forum casually dismiss him. I would go so far as to say I think he’s been a much bigger net positive than much if not all of the EA movement: massively improving our prospects on climate change, and reducing existential risk by moving us towards being multiplanetary as fast as possible.
The standard counterarguments seem to be ‘bunkers > planets’, ‘AI makes being multiplanetary irrelevant’, and ‘climate change isn’t a big deal so Tesla doesn’t matter’. I think all three of these arguments are a) probably wrong and more importantly b) almost completely unargued for.
I’m unclear who I feel has the burden of proof on such issues. In some sense burden of proof is a silly concept here, but in another I feel like it’s very important. When 80k et al. regularly talk people out of becoming engineers to go into AI safety research or similar, a view which is then often picked up by the wider community, it seems very important that those same EAs put serious thought into the counterfactuals.
Well, clearly Musk is much better than all the EAs; he built these massive multi-billion-dollar companies and created loads of value along the way! We’re going back to space with Elon! How cool is that? If you disagree, well, ok, I guess that’s a very bold take considering the stock market’s opinion...
Re EVs, I agree as well: even if you don’t believe the climate stuff (I do, with some caveats), Teslas are very beautiful, great cars, and almost certainly good for the world on other dimensions (e.g. less local pollution in urban areas, etc.).
How do you feel about the “red pill” they seem to embrace (Musk openly and Thiel by evidence)? Do you feel this worldview affects their actions? Do you think it is extreme? Which political affiliation does the “red pill” seem to belong to, left or right? Do you believe in that “sexual markets” stuff? Thank you for your replies.
I would have upvoted but for the red pill paragraph, which seemed needlessly uncharitable to Thiel and Musk. Your comment here seems more like it’s spoiling for a fight than looking for a discussion.
IIRC Musk once tweeted ‘take the red pill’ with no context, a phrase which traditionally referred to any instance of people having a radical perspective shift. When asked, he said he didn’t know about the pick-up-artistry subgroup of the same name. I see no reason to disbelieve this, and I haven’t heard him say anything particularly in line with their views elsewhere.
The red pill philosophy is broadly associated with (though strictly unrelated to) right-wing politics. What does that have to do with anything? Plenty of EAs are right wing. It’s not a pejorative.
To sum up my other comment: yes, I want to confront you about normalizing the red pill. I think it’s fascist and dehumanising.
Yes, I also think it’s relevant to AI alignment, because a community that is not aligned itself, that is “at war” between its own genders (tech people), is unlikely to align something else well.
Saying this as a person from a fascist country who kinda supports an ex-fascist politician trying to do better and be kinder (see Navalny).
Saying this as a sexual abuser and as someone who was mentally abused.
Saying this as one who apologized and saw that what I did was wrong. And one who now sees how stupid and unnecessary it was.
Saying this as one who talked to pro-Putin people a lot to understand how this all works.
There are ways to have both emotion and logic at peace and harmony. Together. Not at war.
Red pill ain’t it.
It’s extreme, aggressive, ugly, stolen, perverted, dead.
Which “right wing” do you mean? I think it was about “small government” (but not “zero government”).
How is “red pill” related to “small government”? :)
You’re using the other “right wing”, the one related to the traditional family. That is one step there: a patriarch in the family. “Red pill” asserts that it has enough explanatory power to overwhelm free will in the decisions of women and men, that the “sexual market” is a clearer explanation for how relationships go.
I’d say it’s a bit of an extreme step, because it claims a single simple objective for the whole of humanity, “women procreate, men fight”, creating a “stereotype of masculinity” that is about “winning fights, physical or metaphorical”.
This theory completely ignores male singers who don’t seem to fit this stereotype. Some women loved Michael Jackson, and he doesn’t seem to be the “fighting type”, rather the feelings one.
This theory has blind spots, and it is asserted quite forcefully: it has a mechanism of making one scared that they’re “poisoning their market value” if they do something out of line, as seen in “chad/incel” memes, for example.
Saying this as a person from Russia who saw the rise of fascism in our country, and how our culture war went from the internet to the battlefield. I believed in this. I have seen it to be false. Saying this as a person who is responsible for sexual assault and who tries to heal and be better. “Red pill” is b.s.; see my posts on Mastodon for more on this.
It’s an extreme theory that ignores important corner cases (queer people), and it tends to make people resentful towards anything that doesn’t fit the theory, all while taking away “free will” and replacing it with a “simple objective function”, without any research and with clear outliers/exceptions, and it is linked to male violence. Ironically, it turns people into machines, the very thing the real “red pill” from the movie was not really for: the concept’s name is stolen from a movie by trans authors and basically turned upside down in an evil, twisted way. Neo was like “I’m gonna talk to the machines and bring peace to y’all. The war is gonna end.” Red pillers are like “we like guns, force and fighting and don’t like to talk about complex things much” 🤷♀️
“Small government” right wing is not a pejorative. A “traditional cisgender relationship with a man deciding things” is ok too, if a woman likes it too (and is not forced into it). “Red pill” is, like, way out there for me: it’s the notion that a man can take any woman, which is nonsense if we consider that some women cheered when Trump was like “grab them and such”, while some women would not tolerate a single violation of consent, like tagging on Twitter. Women are just people. People are different.
Musk’s ex-partner made a song about him.
https://m.youtube.com/watch?v=ADHFwabVJec
“If I loved him any less, I’d make him stay
But he has to be the best, player of games”
She asserts she is aware of the ongoing “push-pull” pickup artistry from him, but refuses to apply it herself to achieve her goal, and then says that the dude is basically always at work.
Judging by the video, using subjective holistic judgement, I’d say he’s legit red-pilled.
And my post above says that the red pill is extreme and linked to fascism.
And I say it’s related to so many cases of sexual assault in tech, EA, and finance: people see “simple markets” where there’s just so much more complexity, and not necessarily much of a market :)
Bite me :)
So you don’t have any further reason to think Musk has anything to do with red pill philosophy, but you’re going to cast a bunch of aspersions on him and then randomly insult me at the end.
Bye.
I’m not insulting you. I’m challenging your belief.
And, where is the insult? Which line?.. I’m saying that the red pill ideology is fascist. How does it insult you? Well, unless...
And yes, I think that if his multiple wives all say kinda the same thing, it’s legit evidence. Yes.
And yes, I believe this is relevant for alignment. Directly. A community of red pillers creates an AI. Where would it go and what would it do?..