I downvoted this post because it felt rambling and not very coherent (no offence). You can fix it though :-).
I would also be in favour of having more information on their plan.
The EA Corner Discord might be a better location to post things like this that are very raw and unfiltered. I often post things to a more casual location first, then post an improved version either here or on LessWrong. For example, I often use Facebook or Twitter for this purpose.
It is rambling and incoherent. See why here: https://forum.effectivealtruism.org/posts/bmfR73qjHQnACQaFC/call-to-demand-answers-from-anthropic-about-joining-the-ai?commentId=EaBHtEpJCEv4HnQky
It’s part of what I’m talking about here.
There will be no more editing. I have done quite a lot in this direction (not on the EA Forum). I have experience in political movements: when one does so much but the community is still not “getting it”, the solution is for the community to figure things out for itself. Or maybe, after all, I am wrong?
This isn’t a school assignment. Your grade on my post is meaningless.
What is meaningful is how you feel about the problem itself and what you will do about it.
I mean, I don’t even understand how you feel. It’s just vague upsetness and trauma and a wish for Anthropic to respond? I think people just don’t share your feelings and find them incongruent with how they view the empirical facts. Even in this thread you can’t decide between wanting models released and wanting “public participation”. You also say these models cause present-day harms (Claude isn’t released yet?), while citing people whose ethics amount to open-sourcing and releasing everything (e.g. Hugging Face’s Dall-E Mini didn’t even have a pornography blocker for its first few days).
I think you say you want a discussion about Anthropic (this has been done quite a lot on the forum), but then you offer no way to have one. And anytime the discussion disagrees with you, you retreat to justifying the post by saying it’s “trauma” and “your grade on my post doesn’t matter”.
This comment is exactly why I started this, and it is the result of my post. I see it as a success.
So, can we have a larger discussion about this?
I am only one person. I wrote this post.
To have a bigger discussion, there need to be more people.
I see you care about this.
2+2=...
I would not like to discuss things with you; given your previous actions, I don’t think that would be fruitful for anyone involved.
Don’t discuss it with me! Discuss it with the community! :) I’m not an EA!!!
To be more object-level:
YES, I am confused about “releasing models” versus “public participation”. Very, very much.
I don’t think it’s just me though.
The Google ethics team is confused too: Margaret Mitchell went to Hugging Face and Timnit Gebru went to work on public participation.
All of this is tricky: there’s a culture war in many countries, and somehow, in those conditions, we need to have a discussion about AI. We can’t not have it: secrecy will only make things worse, through lack of feedback, backlash, and lack of oversight.
Releasing models makes them easier to inspect but also opens the door to bad actors.
It’s a mess.
It’s more like the whole industry is confused.
What seems reasonable is to slow all this down a bit. It’s likely that a lot of ML people are burned out from working so fast and are not thinking clearly.
We saw Yudkowsky talking on Twitter and trying to save everyone; that doesn’t suggest things are going particularly well.
As you have seen, I am definitely for slowing things down—all in for that.
How can we do that, so that later we can discuss all this mess and at least be in a sane state for it?
To be less cryptic: it’s not really about me. It’s about the community finally discussing these real, pressing problems instead of talking only about shrimp and infinite ethics (nothing wrong with that, but not when there’s a big pressing issue with something being off in AIS).
I’m just one person. I hold the positions that “no regulation at all” is not the way, that “too much regulation” is not the way, that “talking to the public” is the way, that “the culture war can be healed”, that “funding only from billionaires is not the way”, that “listening and learning is the way”, that “Anthropic seems off”, that “AIS culture seems off”, that “EAs are way too ignorant of everything that’s current or outside EA”, that “red pill is widespread in tech and EA and this is not ok”, and, in general, that “let’s discuss it broadly”.
My experience led me to these beliefs, and I have things to show for each of them.
I don’t really know the best way of aligning AI. A definite first step is to reach some consensus, or at least a concrete map of disagreements, on these issues.
So far, the community’s approach is: “big people at famous EA entities do it, and we mostly discuss non-pressing issues about infinities while they, over there, make controversial, potentially civilization-altering decisions (if one believes ™️), unaccountable and vague, on top of an ivory tower.”
My post is a way to deal with it and I see it as a success.
I am not your leader. I will not do the things you said I should do. I will not “lead” this discussion; that is impossible.
What I can do is inspire people to do it better than me.
Your move.