I don’t know if demanding answers makes sense, but I do think it’s a pretty hard call whether Anthropic is net positive or net negative for AI safety; I’m surprised at the degree to which some people seem to think this question is obvious; I’m annoyed at the EA memeplex for not making this uncertainty more transparent to newcomers/outsiders; I hope not too many people join Anthropic for bad reasons.
I’ve been watching this discourse since 2018, including when I was in EA and doing AI safety.
At no point did I see a discussion of whether a big EA-adjacent org is net positive or net negative.
It’s some sort of “blind spot”: we evaluate other people’s charities. But ours are, of course, pretty good.
I feel it’s time to have a discussion about this; that would be awesome.
I mean, at least in global health and animal welfare, most of the time we don’t evaluate charities for being net negative; we only look at “other people’s charities” that are already above a certain bar. I would be opposed to spending considerable resources looking at net-negative charities in normal domains; most of your time is much better spent trying to triage resources toward great projects and away from mediocre ones.
In longtermism or x-risk or meta, everything is really confusing, so looking at net positive vs. net negative becomes more compelling.
For what it’s worth, it’s very common at LTFF and other grantmakers to consider whether grants are net negative.
Also, to be clear, you don’t consider OpenAI to be EA-adjacent, right? Because I feel like there have been many discussions about OpenAI’s sign over the years.
I feel I have failed right here. I want EA people somehow talking to each other and finally deciding something together, not talking to me.
I don’t really know. I’m not the one to ask :)
What is “EA-adjacent”? Well, we can come up with some phrase for a definition. Then see how some corner cases don’t fit into it, extend the definition, and repeat a few times.
It would work for some phases of EA (like when there were only bed nets) but not for the future; it will need to be updated.
This seems to be mostly what people do here: dividing the world into concrete blocks with some structure on top.
That doesn’t answer any of the concerns; it’s so far away from them, creating some taxonomy of what’s EA and what’s not in EA...
What was the issue? That some people at Anthropic stopped informing us about what’s going on. That the industry is kind of confused about what to do, burned out, and some (me included) would say radicalised into “male warriors going bravely and gloriously into Valhalla at full speed”. That there are so many issues with AI today (how to talk to the public? how to get help with this? how to stop current harm? what about regulation? etc., etc.) that it seems people tend to just ignore it all and focus on the shrimp and infinite ethics. I feel this lethargy and apathy too. Let’s not go there; this has only one possible ending.
Let’s evaluate THAT.
It doesn’t matter how we define it.
Does the culture of OpenAI and EA intersect? Yes. A lot. Are they causally linked? Yes. A lot. Is Anthropic causally linked to all this as well? Yes. A lot.
Is something wrong over there? Yes. Definitely looks like it to me.
That’s all that matters. Since we’re (apparently) the people who are supposed to do something about it, let’s do it. Let’s finally have a debate about whether “ignoring issues today is acceptable”. Let’s discuss “what do we want Anthropic and maybe OpenAI to do”, let’s discuss “how can we get outside people to help”. Let’s finally discuss “whether red-pilled stuff is ok”.
All of this has apparently been ignored for decades.
Can we please not sweep it under the rug?
About the discussion: ethicists are going on TV programs and it’s going pretty well. No “normies don’t understand”, none of that. It’s working quite ok so far.
No need for “write your post in a format that I can parse with my RationalityParser9000. Syntax error on line 1, undefined entity ‘emotion’. Error. Loading shrimp welfare...” 💔
C’mon. Nothing to be afraid of. You really don’t need a tranny from Russia to lead you into a discussion about some next shit that’s about to blow in Silicon Valley. I’m pretty sure you can do it :)
Don’t ask me, I’m an immigrant here. The “minor inconvenience”, “a mere remainder, mere ripples” in someone’s utopia, an artifact in a render, a glitch, a fluke, a “disappointment to EA leaders seeing me”. I don’t know.
Ask other EAs :)
Can you please take this comment down or edit it, given you have inexplicably used a slur? (Not that there is ever a good context.)
Feels kinda mean to tell a non-native speaker off for using a slur about their own group.
Ah, apologies, my mistake; I didn’t know. It was possibly wrong of me to assume this was in bad faith, and I definitely don’t want to tell trans people how to refer to themselves.
tbc I don’t know any more than you here, and I only have the text of the comment to go off of. I just interpreted “You really don’t need a [blip] from Russia to lead you into a discussion about some next shit that’s about to blow in Silicon Valley. I’m pretty sure you can do it :)
Don’t ask me, I’m an immigrant here.” as referring to themselves. I found the rest of the comment kind of hard to understand, so it’s definitely possible I also misunderstood things here.
Yes, it’s about me; I’m a trans girl from Russia. And yes, I’m saying that it would feel weird to me to do something with the EA community.
People here believe it’s ok to believe in the “red pill” (not the one from the movie, the other one; see the most downvoted subthread here). I don’t want this in my life. It doesn’t feel ok to me to believe in that.
People here believe in utilitarianism (see Sabs’s comments; he’s not alone in this), which usually makes people like me the “mere ripples”.
It would just feel weird: a peasant helping the master to deal with some issue together?
The world is not ready for it.
I’d love to be proved wrong though.
My experience is that it goes like this: I say something, polite or not polite, anything related to this set of issues, and I get downvoted or asked to “rephrase it in some way”.
What I really want is answers.
Like, the RX/TX balance of this conversation is: I sent a lot of stuff to EAs and got not much meaningful response.
So I stop.
Nah it maybe seems like I was wrong. If so, apologies OP!