I’m not saying stalling the films is something the movement should do at all, merely that it’s an option, albeit a risky one… I was sharing it for people who may be featured (and may not be aware they have life rights). The film will get attention no matter what. The flip side is, the film comes out and then what, EA says or does nothing? If that’s what everyone thinks is best, so be it. I really wasn’t trying to prescribe a course of action, merely suggest some options (because don’t you hate it when people point out a (potential) problem without providing some ideas for a solution?)
I understand EA developed from/was influenced by several groups, but at some point it needed to articulate its own identity, and it seems people still tie us to the rationality movement. This is not a slam against them at all, but correct me if I’m wrong: we have different purposes/aims than that movement, right? So it might be beneficial to clarify the difference between the two groups.
I appreciate that, and it’s not just you; it’s the immediate downvoting and pushback in the comments, all of which give the subtext of “you’re wrong to think this way” without any effort to engage or ask questions. If you’re too busy to engage fully, then maybe don’t comment/vote.
I thought your referencing Scientology was a tad ironic; the behaviors EAs have exhibited over the last 10 years are what created the cult narrative around us. What’s problematic is the immediate defensiveness against your own kind, someone who identifies with the movement, instead of working to see their perspective or explaining your own in a kinder tone.
Meta-commentary for anyone in community building on why people may opt out of engaging...
I’m not trying to stoke flames here with my post, and one reason I don’t engage much on the forum anymore is the strength of the pushback I get for offering what I consider earnest, simple commentary/questions.
A common vibe is that the movement has a strong defensive reaction to anything remotely perceived as criticism. Per a comment below, grandstanding about not making assumptions about the movies implies that I’m wrong to be worried… but I’ve been watching the development of these projects since they were announced back in 2023 (along with other AI projects, out of personal interest in film/entertainment).
I’m not brashly lashing out at these films after having just learned about them five minutes ago… I’ve been waiting for them to be brought up and haven’t seen any comment across various channels (forums, Twitter, blogs, etc.), so I figured I’d raise a hand and ask, “Is anyone else worried about this?”
For all the talk about learning from FTX and foresight, it seems odd that this hasn’t been mentioned. If I’m wrong to ask those questions, let me know and I’ll happily find an exit. After 10 years in this movement, every time I try to earnestly engage (in the only way I can right now, via the forum) I’m immediately met with what feels like premature suspicion/defensive rejection.
I’m all for debating ideas, but I guess some things should only be brought up in in-person conversations, or not at all. For an autistic person it’s very hard to discern how to engage with EA, because these social rules/norms of interacting are non-obvious.
I’m starting to realize we may be talking past each other because of different vantage points in the movement. Sam ‘won’ the OpenAI board coup by getting reinstated as CEO. He also ‘won’ in the sense of positive optics, given how much Silicon Valley rushed to defend/support him during that situation. Your comment suggests we’re reading different press coverage, but specifically during that event, the press for Sam was positive and for the EA movement negative. Even now, I wouldn’t say press for him has been ‘overwhelmingly’ negative. Yes, the tides are shifting toward people being wary of AI, but imo that shift isn’t happening fast enough. There is some positive press/support for AI safety work, but I don’t feel it’s enough by comparison.
Yes, sorry, that’s confusing; I’ll switch to last names. Guadagnino’s Artificial is the one with Altman.
Thanks for the comment, Igor! I agree; letting a narrative form on its own is considered bad practice in the corporate branding/PR space, because people love to create stories around something so it has meaning for them.
I definitely think this is an issue, which is why I was flagging it: I haven’t seen these film/TV projects discussed anywhere in EA, and they’ve been in development for over a year. I decided to just put it in the waters given the movie just wrapped filming… maybe people are aware of it, but communication from leaders has been light these past few years, leaving some of us in the dark. I’d rather people have a heads-up so they aren’t blindsided like with FTX/OpenAI.
From a Hollywood ‘we sell drama as entertainment’ lens: in a conflict, someone has to be the ‘hero’ (they’re right) whom the audience roots for, and someone the ‘antagonist’ (they’re wrong). Who do you think that’s going to be? To the victor go the spoils… ultimately Sam ‘won’, so do you think the representation will be in EA’s favor?
We can also guess from the press coverage at the time, which skewed largely toward pro-Sam, anti-EA language, even from non-tech sources (finance, politics, entertainment). From the outside looking in, most people do not understand what EA is, and they therefore misconstrue our work, because it is deliberately simplified in the press (and mixed in with the MIRI/Bay rationality crowd) to “they think Terminator will kill us all”, which, to the public at large (who, yes, are ignorant about the dangers of technology), sounds juvenile/fear-mongering/cultish.
(Which is why improving our communication of the issues to a lay audience is so critical and seems to be undervalued or perhaps bottlenecked by talent/resources).
That, plus a general negative shift following SBF/FTX still lingering on people’s minds…
I’m curious why you assume it will be a positive representation of EAs.
I think the tone of my piece may have conveyed a stronger sentiment than I intended; I’ll consider editing to improve that. But in no way am I saying we should ban the films outright (although stalling long enough to find out what the representation will be could be valuable).
Given we don’t know the full intent of the films, the portrayal could be not just critical but malicious (even if merely farcical), and I’m simply pointing that out now, well before the release date, so that we, across the movement, can be prepared for that hit to our perception and, as you say, begin working on “creating a positive counter-narrative” …whatever that may be.
I agree that outright shutting down critiques is problematic, ‘cult-like’ behavior, but EA was already doing that for years prior to FTX, which contributed to people’s view that this movement is a cult. The problem with having done that prematurely (out of defensiveness, contrarianism, edge-lording) is that there is now a legitimate reason to deflect a potential new wave of negative press, and the leaders have ‘evolved’ to saying “optics don’t matter!” The thinking among the higher brass is out of sync with the reality of our situation.
If you disagree, could you please give an inkling of a reason? Even a word-salad list of search terms I can decipher is better than blind downvoting. (Note: this system is kind of terrible for learning and makes me uncomfortable posting earnest observations and ideas.)
EA needs organic optics, not ‘no’ optics
Deleted my original comment about the first DDoS attack because it was called a ‘crackpot theory’ that ‘shouldn’t be on the forum’. I didn’t phrase it right, but I was asking for any research from catastrophic-risk groups on probabilities/estimates of attacks like this increasing (especially with the global heat around the AGI race), and for any recommendations for regular citizens to prepare for it.
The second attack hit this morning, within a week of the first, and it’s now picking up in the press as a potential threat. So I’m going to trust my gut on this one and say I’m not wrong in forecasting that this is an immediate emerging threat. I’m going to start compiling some work on this; let me know if you’re interested.
I’m still digesting this, but I’m in shock: for the first time in almost 10 years of drifting in and out of EA, I feel seen, heard, validated (from an LLM, no less). Every little itch or frustration with EA that has made partaking in it all these years challenging has been captured and articulated by the egregore analysis. Recognizing these issues, which were growing and festering long before SBF happened (and may have allowed SBF to happen), is the first step to solving them… now the question is, does EA ‘want’ to fix them?
I’m relieved to see someone bring up the coup in all of this. I think there is a lot of focus in this post on what Thiel believes or is “thinking” (which makes sense for a community founded on philosophy) versus what Thiel is “doing” (which is more of an entrepreneurship/Silicon Valley lens). We can dig into what led him down this path later, imo; the more important point is that he’s rich, powerful, and making moves. Stopping or slowing those moves is the first step at this point… I definitely think the 2027 hype is not about reaching AGI but about groups vying for control, and OpenAI has been making not-so-subtle moves toward that positioning…
[redacted]
Yes, Thiel has returned to supporting Trump more fully since the election. Many presume it’s because of lucrative contracts for Palantir conducting surveillance and data collection for the US government. Best summary I could find on short notice.
This website is amazing! Not sure how long you’ve been a product manager (given you’re only 25), but it’s beautifully done. I think this quality of design is part of what EA has been missing in effectively communicating its key concepts and methods to the world. I would encourage you to reach out to design groups or connect with larger organizations. I don’t have any direct contacts, but groups like User-friendly or And-Now (design firms), CEA (who runs the primary effectivealtruism.org informational site), and professional groups at EA Globals would be a good start.
I’d be doing less good with my life if I hadn’t heard of effective altruism
I’m willing to bet there is a correlation between voting positively and being directly connected to the community, whether through work, conferences, or living in a hub city.
Yes, sorry I meant the university. Thanks for including the charts, I always love to see the data!
That’s great, Eevee! Forethought has shifted to focusing on AGI (so yes, I’m sure they need support, but they aren’t doing work like GPI’s, just to be clear). There is a list at the bottom of the page of other organizations, but none of them work directly on the philosophical approach to cause and resource prioritization, which is what made GPI so valuable to the movement. My best recommendation would be to broadly support groups like GiveWell, Giving What We Can, and The Life You Can Save, since they do some level of research (really, all groups do). You could also follow specific researchers’ work and reach out directly to see if they need support.
Good News Friday: Actor Jesse Eisenberg (The Social Network) announced he’s donating his kidney to a stranger today :)