This has been brought up a few times before. Obviously EA isn't a monolith, but I personally might like the idea of making Sam Altman a "villain" better than denouncing AI in general. Either would do, though. I would love for some EA orgs (not just individuals), and even meta orgs, to take a step like this. Yes, it would be a risk, but I think it could have huge benefits in reassuring the public and EA doubters post-FTX, in addition to the obvious AI safety benefits. Many still associate EA with OpenAI, which is sad.
Interestingly, this sentiment generally seems to get met with a little more disagreement than agreement in previous discussions.
I agree with @Saul Munn, though, that it could be helpful to spell out exactly who would do the denouncing and how it would happen.
I was thinking along similar lines as you about EA orgs taking stances, and I like the idea that Sam Altman or other specific actors might be easier to make "targets".
But yeah, I was mainly curious to get a sense of why I haven't seen more action from EA on OpenAI if people agree (with me) that they are so bad for AI safety. Maybe people just don't think OpenAI is bad.