I started eating dairy again (after 15 years of veganism) as part of a moral trade. Then, when the trade ended, I chose to continue eating dairy because of how much flexibility it had given me back. I can eat at the airport. The constant food-scarcity program that had been running in the back of my mind since I was a teenager, taking a much bigger toll on my mental health than I had realized, was gone. As much as this kind of thing usually sounds like an excuse, I honestly could not conclude that it was better for the world for me to go back to that, for the amount of suffering it prevents.
This topic is painful for me because I opened my heart to EA as fellow travelers who could share my moral burden. And I feel betrayed by my former community when they let themselves be charmed and intimidated by the AI industry, and then turned all their rhetorical tricks against me as reasons not to listen to me (“You’re not being nice and/or smart enough, so it’s actually you who has the problem.”).
I realize now the rationalists were always cowards who wanted the Singularity, and to imagine colonizing galaxies, more than they wanted to protect people. But I can’t quite accept EA’s failure here. I’m like a spurned lover who can’t believe the love wasn’t real. I thought, at its core, EA really got it about protecting beings.
It should hurt to work with AI companies. You should be looking for excuses not to even if you think there’s an important reason to maintain the relationship. But instead it’s always the other way around because working with corrupt tech elites who “joke” about being evil is ego-syntonic to you. That’s an extremely serious problem.
There’s a serious courage problem in Effective Altruism. I am so profoundly disappointed in this community. It’s not for having a different theory of change—it’s for the fear I see in people’s eyes when considering going against AI companies or losing “legitimacy” by not being associated with them. The squeamishness I see when considering talking about AI danger in a way people can understand, and the fear of losing face within the inner circle. A lot of you value being part of a tech elite more than you do what happens to the world. Full stop. And it does bother me that you have this mutual pact to think of yourselves as good for your corrupt relationships with the industry that’s most likely to get us all killed.
I agree with all your suggestions and don’t see them in contrast with the post.
I’m not trying to say reality will never be lumpy, but I am claiming that we can’t make use of that without a contingent of the overall AI Safety movement being prepared to take a grind-y strategy. Sometimes it’ll be pure grind and sometimes it’ll have more momentum behind it. But if you have no groundwork laid when something big happens, you can’t just jump in and expect people to interpret it as supporting your account.
Did you feel treated ungently for your warning shots take? Or is this just on behalf of people who might?
Also, can you tell me what you mean by “ontologically ungentle”? It sounds worryingly close to a demand that the writer think all the readers are good. I do want to confront people with the fact that they’ve been lazily hoping for violence, if that’s in fact what they’ve been doing.
I thought I was giving the strong version. I have never heard an account of a warning shot theory of change that wasn’t “AI will cause a small-scale disaster and then the political will to do something will materialize.” I think the strong version would be my version: educating people first so that they can recognize the small-scale disasters that may occur for what they are. I have never seen or heard that advocated in AI Safety circles before.
And I described how impactful ChatGPT was on me, which imo was a warning shot gone right in my case.
What is the “strong” version of warning shots thinking?
I’m just actually really curious what they disagree with!
I’m so curious why the initial spate of disagree-reactors disagreed with the post. It still has more disagrees than agrees. What’s the crux?
So good to see something go right. Great work!
I call this the pollen strategy, and I very much believe in it.
High praise!
But EA is not cause agnostic OR cause neutral atm
Agree with your read of the situation, and I wish that the solution could be for EA to actually be cause neutral… but if that’s not on offer then I agree the intro material should be more upfront about that.
How do people think these consensuses are achieved? Acting like the issue matters and combatting the obfuscation of industry are a big part of that.
Is it for the same reason CAIP appears to have gone bankrupt? That a “major funder” (read: Open Phil) pulled support and that triggered a cascade of funders pulling out?
EDIT: This is my unconfirmed understanding of the situation.
The myth of AI “warning shots” as cavalry
Please :)
Totally. I gave up some kinds of influence, and the highest point on the moral high ground, by not being totally vegan, but I’ve gained another kind of influence with people who were ready for reducetarianism.