Chris Leong
Currently doing local AI safety movement building in Australia and NZ.
And because microaggression and internalised racism (MIR) may come across as “culture war” loaded terms (despite them also being academic terms)
You seem to be assuming that just because something is an academic term it isn’t culture-war loaded, despite the fact that some of these fields don’t actually see objectivity as having any value.
(I actually upvoted this post because it is very well written and I appreciate you taking all of this time to define a key term).
I’m not a fan of this negativity. Why not be grateful for all the money he’s donated to the Bill and Melinda Gates Foundation instead?
Fascinating, I can’t believe I’ve never heard this argument before.
What do you think was the best point that Titotal made?
I’m not saying it can’t be questioned. And there wasn’t a rule that you couldn’t discuss it as part of the AI welfare week. That said, what’s wrong with taking a week’s break from the usual discussions that we have here to focus on something else? To take the discussion in new directions? A week is not that long.
I think it’s very valuable for you to state what the proposition would mean in concrete terms.
On the other hand, I think it’s quite reasonable for posts not to spend time engaging with the question of whether “there will be vast numbers of AIs that are smarter than us”.
AI safety is already one of the main cause areas here and there’s been plenty of discussion about these kinds of points already.
If someone has something new to say on that topic, then it’d be great for them to share it, otherwise it makes sense for people to focus on discussing the parts of the topic that have not already been covered as part of the discussions on AI safety.
I’m pretty bullish on having these kinds of debates. While EA is doing well at having an impact in the world, the forum has started to feel intellectually stagnant in some ways. These debates provide a way to move the community forward intellectually, something I feel has been missing for a while.
You wrote that governance is more important than technical research. Have you considered technical work that supports governance? The AI Safety Fundamentals course has a week on this.
In any case, working in AI or AI safety would increase your credibility for any activism that you decide to engage in.
Exciting news! I don’t know whether we should prioritise Digital Consciousness, but I think it’s important for there to be de-confusion work happening in this space.
Feels like this calls for a broader discussion about how much EA should focus on longtermist vs. near-termist interventions.
Definitely not worth spending a whole week debating this vs. someone just writing a post if they feel strongly that it hasn’t been sufficiently discussed.
I think we could give that a go, but it might make sense to have a vote after three months about whether it was too much.
Upvoted for making your prediction. Disagree vote because I think it’s wrong.
Even if we expect AI progress to be “super fast”, it won’t always be “super fast”. Sometimes it’ll be “extra, super fast” and sometimes it’ll merely be “very fast”.
I think that some people are over-updating on AI progress now only being “very fast”, thinking that this can only happen within a model where AI is about to cap out, whilst I don’t think this is the case at all.
I wonder if it would be worthwhile for a bunch of AI Safety societies at elite universities to make some kind of public commitment about something in this vein. This probably carries more weight/influence than 80,000 Hours. It would be more valuable if we were trying to influence them, but it’s less valuable given that we probably don’t have any plausibly satisfiable asks so long as Sam is there.
Sorry, the question asks about the counterfactual value of different professions, but it doesn’t say what you’re comparing a post-doc to.
I agree that there’s a lot of advantage in occasionally bringing a critical mass of attention to certain topics where this moves the community’s understanding forward, vs. just hoping we end up naturally having the most important conversations.
I agree that having a more experienced founder could quite possibly make a difference.
Beyond that, I wonder whether it would make sense for people to consider interventions that are further upstream, i.e. some kind of fellowship course for people interested in going into policy in this kind of area.
How is this fellowship broader?
I think we should use talk invitations to nudge people towards acting in good faith.
I would be sad to see Emile Torres offered a speaking slot at an EA conference, as this would reward bad faith criticism. I wouldn’t join a social pressure campaign to cancel him—sometimes people will make decisions I consider unwise and I’ll make decisions that they consider unwise—but I would caution anyone considering this that inviting someone who often acts in bad faith would be an unwise decision, and I would strongly recommend that they consider alternate names before resorting to Emile (I don’t think it would be hard to find equally interesting critics without the bad faith; his name just immediately springs to mind due to availability bias).
It would be useful to have a term along the lines of outcome lock-in to describe situations where the future is out of human hands.
That said, this is more of a spectrum than a dichotomy. As we outsource more decisions to AI, outcomes become more locked in and, as you note, we may never completely eliminate the human in the loop.
Nonetheless, this seems like a useful concept for thinking about what the future might look like.