I don’t really know.
But that’s a good point: Chesterton’s fence is a pretty good heuristic.
Probably some people were being a bit pushy advertising their services?
“The framing of your question suggests EA’s role is to prescribe actions”
Was I presuming this? I didn’t think I was. I was just talking about how it is hard to simultaneously meet the needs of folks with very different worldviews.
In particular, I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis. Likewise, I think that many people who wish to do good in the world reject the TAI hypothesis for bad reasons that they would regret on reflection. Where do people go to correct these errors and build better models of which actions are best to take all things considered?
You can definitely run a session on this. The challenge is where do you go from there? If you take the conversation to more concrete issues you risk losing half your audience. I’m not claiming this is impossible, just that it’s tricky.
“I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis”
I’m curious what your explanation would be. Mine is that the media landscape is filled with hype; there are all these philosophical arguments that are hard to evaluate; and even if you know that some predicted crises will come true, most people don’t have high confidence that they could predict which ones. Even if they could, it’d take a massive amount of time, people’s lives are pretty busy, and what would they do with that knowledge anyway?
That’s orthogonal to the point that I raised about it being hard to run a course that simultaneously manages to be a strong fit for different groups of people.
The last EAG I attended had rules restricting handing out materials.
Having just finished watching this Dwarkesh video which explained how big a deal pamphlets were when they were first invented, I’d actually go the other way and encourage it instead.
Here’s my reasoning: Talks have been de-emphasised in favour of one-on-ones at EAGs. There’s a lot to like about one-on-ones, but one disadvantage is that we’ve removed a key avenue for ideas to gain a critical mass and enter the water supply. Pamphlets could fill this gap. After all, if you’ve picked up a good pamphlet, it’d be quite natural to pull it out when the topic comes up in conversation.
Additionally, when you have dozens of one-on-ones, things often blur together. You can be disciplined and keep notes, but that’s hard, and I often find my phone is short of battery. If people handed out pamphlets containing their proposals or takes, it’d be easier to review them afterwards; conversations would be much more likely to have lasting effects. Two further benefits: it might be more efficient to exchange pamphlets at the start of a one-on-one, and producing a pamphlet would force people to figure out how to communicate their ideas clearly.
Have you thought about the possibility that EA may have resonated in a particular social context that no longer exists?
But a community that took twenty years to develop its particular structure of norms and mutual knowledge cannot be regrown in another twenty years, because the conditions that shaped it no longer exist. The people are older, the context has changed, and the specific convergence of circumstances that brought those particular individuals together in that particular configuration at that particular time is gone. Communities are path-dependent in the strongest possible sense: their current state is a function of their entire history, and you can’t rerun the history.
The main challenge I see is that for half the potential audience, AI is clearly the biggest thing going on at the moment, while the other half sees it as clearly overhyped. And it’s quite hard to construct a program or run events that will really hit it out of the park for both sides at once.
I would be keen to hear if you think you have any solutions to this bifurcation.
Teaching people skills takes time, and different jobs require different skills. I haven’t done this program either, but I expect that inspiring people to pursue a career and improving their clarity about what it might look like is both more general and quicker to achieve.
A lot of these claims are subtly different from the ones I made (not claiming that you were necessarily asserting that I agreed with them).
“Engage in the same behaviour if given power is factually untrue”
I wouldn’t endorse this statement either. Left and right fascism express themselves differently. So I definitely wouldn’t predict the ‘same behaviour’.
“Anti-fascists are a wide coalition consisting of a wide array of political views”
There is a wide coalition against fascism, but they don’t call themselves antifa. It’s a much narrower group that adopts that label.
“I do not think that if the right loses the next election, that the left would be equally fascist”
I don’t expect that either. But they may still ‘lock in’ some of the backsliding, which would become the new standard from which behaviour is measured, enabling continued escalation from there.
The claim I made was “‘anti-fascist activists’ are often just as fascist as anyone on the right” and I believe that’s true. The impact of an election depends on the choices of a much broader set of people.
“The current administration flooded Minneapolis with poorly trained thugs who made it unsafe to go outside as a non-white person. I do not believe that a President AOC or whoever will take actions of equivalent damage.”
The damage that an action causes in the long-term has relatively little correlation with the damage that an action causes in the short-term. I’m not claiming ‘equivalently damaging short-term effects’.
I saw that this comment was downvoted earlier. I think this is a mistake: many people will have similar questions. Indeed, I saw multiple indications in the post that the author likely defines ‘fascism’ differently than many people in EA do.
Stronger: I think it’s reasonable to wonder whether the author’s definition is somewhat ‘fuzzy’, even though River’s phrasing was a bit too direct for my taste.
Thanks for posting this, it made me think.
Here are my thoughts:
• Authoritarianism is a real risk. I think this has been clear for a while, but I’ve updated upwards multiple times.
• I agree that it’s possible to analyse the issue of fascism in a non-partisan way. Unfortunately, most ‘anti-fascist’ work focuses on only one side of the political spectrum. I think this is a mistake: ‘anti-fascist activists’ are often just as fascist as anyone on the right, and it’s quite plausible that if the right loses the next election, then instead of aiming to restore frayed norms and institutions, folks on the left will decide that the only option is to fight fire with fire. This is a threat in and of itself, but it would also increase the ability of the right to lean further in this direction if they win power again.
• The mutual aid suggestion comes off as really strange to me. The argument for mutual aid as a way of building the EA community feels much stronger than the argument for engaging in mutual aid as a way to fight fascism. This is especially true if you believe fascism is an urgent threat here and now, rather than a possibility we need to prepare for in case it happens at some distant, undefined point in the future.
• “Mass deportation” really feels like a distinct question from fascism: it’s not really fascism if the government is just enforcing standard immigration laws and there are proper procedural safeguards. On the other hand, even small-scale deportations can legitimately be linked to fascism if they’re being leveraged cynically to chill speech. The raw numbers aren’t the determining factor.
I thought some of their analysis was weak. I made comments at the time, but unfortunately, I don’t have time at the moment to go back and find them.
I’m surprised that there hasn’t been an attempt (as far as I know) to fund/create a competitor to Epoch.ai.
It wouldn’t have to compete on all benchmarks, but it would be good to have a forecasting organisation that could be trusted with potentially dual use insights into capabilities trajectories. I don’t believe this would require uniformity of views: it would just require people with a proper sense of responsibility.
I also think that the bad judgement displayed by some of their employees casts doubt on some of their research (emphasis on some, particularly the more subjective elements; Epoch is still my go-to source in many cases). Unfortunately, there’s a difference between being intelligent and being wise, and one common way this distinction plays out is that some quite intelligent folks follow the incentive gradient towards being excessively and reflexively contrarian. Just to be clear, I’m not trying to attack their research, just noting that whilst a second opinion would always have been valuable, the fact that I trust them less on the margin makes the need for such a second opinion feel more pressing to me.
In terms of producing high-quality research, I’d note that Epoch has done many things well, but it has also made a few calls that I would, perhaps controversially, describe as clear mistakes.
I’m also pretty sure that there’s sufficient talent in the space now to create a second such effort. It could also start small and funders could help it scale if it proves itself.
Thanks for sharing.
I assume you’ve read Tyler Alterman’s excellent but long essay: https://forum.effectivealtruism.org/posts/AjxqsDmhGiW9g8ju6/effective-altruism-in-the-garden-of-ends
How do your views compare to his?
“However, AI timelines have led me to conclude that everything I had previously planned on doing over the course of the coming months or years, must now be completed as soon as possible, ideally by the end of the weekend.”
Really? That feels like excessive haste.
We seem to be seeing some kind of vibe shift when it comes to AI.
What is less clear is whether this is a major vibe shift or a minor one.
If it’s a major one, then we don’t want to waste this opportunity (it wasn’t clear immediately after the release of ChatGPT that we had a limited window of opportunity; if we’d known, maybe we would have leveraged it better).
In any case, we should try not to waste this opportunity, in case it does turn out to be a major vibe shift.
There’s also Founder’s Pledge.
Sure, but these orgs found their own niche.
HIP and Successif focus more on mid-career professionals.
Probably Good focuses on a broader set of cause areas, and it has taken on some of 80k’s old responsibilities since 80k started focusing more on transformative AI.
Oh, I think AI safety is very important; short-term AI safety too, though not quite 2027 😂.
A knock-off MATS could produce a good amount of value; I just want the EA Hotel to be even more ambitious.
“Should our EA residential program prioritize structured programming or open-ended residencies?”
There’s more information value in exploring structured programming.
That said, I’d be wary of duplicating existing programs, e.g. if the AI Safety Fellowship became a knock-off MATS.
What the School of Moral Ambition has achieved is impressive, but it’s unclear whether EA should aim for mainstream appeal insofar as SoMA could potentially fill that niche.
“~70% male and ~75% white” — I increasingly feel that the way to be cool is to not be so self-conscious about this kind of stuff. Would it be great to have more women on our team? Of course! And for EA to be more global? Again, that’d be great! But talking about your demographics like it’s a failure will never be cool. Instead, EA should just back itself. Are our demographics ideal? No. But if circumstances are such that we need to get the job done with these demographics, then we’ll get the job done with these demographics. And honestly, the less you need people, the more likely they are to feel drawn to you, at least in my experience.
“Please, for God’s sake, hire non-EA creative talent” — I suspect this is very circumstantial. There are circumstances where you’ll be able to delegate to non-EA creative talent and it’ll work fine, but there will be other circumstances where you try this and you find that they just keep distorting the message. It’s harder than you might think.
I agree re: 4 though. The expectations re: caveats depend heavily on the context of the post.
An analogy: let’s suppose you’re trying to stop a tank. You can’t just place a line of 6 kids in front of it and call it “defense in depth”.
Also, it would be somewhat weird to call it “defense in depth” if most of the protection came from a few layers.
I expect people to update somewhat. My split was more about where people end up falling after initial exposure to arguments on both sides.
In the past, AI didn’t feel so pressing to the AI crowd, so they had more space to explore, and the discussion of animals and global poverty didn’t feel like dead weight.