“The most likely explanation for a weird new idea not being popular is that it’s wrong. ”
I agree with much of the rest of the comment, but this seems wrong—it seems more likely that these things just aren’t very correlated.
Just noting that these are possibly much stronger claims than "AGI will be able to completely disempower humanity" (depending on how hard it is to solve cold fusion a posteriori).
This is not a fair critique of the post; he's responding to a hypothetical discussed on Twitter.
At the risk of sounding dismissive, it's really not clear to me that anything "went wrong". From my outside perspective, there was no clear mess-up on the part of EAs anywhere here, just a difficult situation managed to the best of people's abilities.
That doesn't mean it's not worth pondering whether any aspect was handled badly, or more broadly what one can take away from this situation (although we should beware of over-updating on single notable events). But, not knowing the counterfactuals, and absent a clear picture of what things "going right" would have looked like, it's not evident that this should be chalked up as a failing on the part of EA.
From gwern's summary over on LessWrong, it sounds like the actual report only stated that the firing was "not mandated", which could be read as either "not justified" or "not required". Is it clear from the legal context that the former is implied?
It certainly does seem to push capabilities, although one could argue about whether the extent of it is significant.
Being confused and skeptical about their adherence to their stated philosophy seems justified here, and it is up to them to explain their reasoning behind this decision.
On the margin, this should probably update us towards believing they don’t take their stated policy of not advancing the SOTA too seriously.
You don't need to be an extreme longtermist to be sceptical about AI; it suffices to care about the next generation and to not want extreme levels of change. Reading too much into differing morals seems like the wrong lens here.
The most obvious explanation for how Altman and people more concerned about AI safety (not specifically EAs) differ seems to be their estimates of how likely AI risk is relative to other risks.
That being said, the point that it's disingenuous to ascribe cognitive bias to Altman for holding whatever opinion he holds is a fair one, and in view of general discourse norms one shouldn't push it too far. Still, given Altman's exceptional capacity for unilateral action due to his position, it's reasonable to be at least concerned.
I realize that my question sounded rhetorical, but I'm actually interested in your sources or reasons for your impression. I certainly don't have a good idea of the general opinion, and the media I consume is biased towards what I consider reasonable takes. That being said, I haven't encountered the position you're concerned about very much and would be interested to hear where you did. Regarding this forum, I imagine one could read that into some answers, but overall I don't get the impression that the AI CEOs are seen as big safety proponents.
Who considers Altman and Hassabis thought leaders in AI safety? I wouldn't even consider Altman a thought leader in AI; his extraordinary skill seems mostly social and organizational. There's maybe an argument for Amodei, as Anthropic is currently the only one of these companies whose commitment to safety over scaling is at least reasonably plausible.
Noted! The key point I was trying to make is that it would help the discourse to separate 1) how one would act within a given frame from 2) why one thinks each frame is more or less likely (which is more contentious and easily gets political). Since your post aims at the former, and the latter has been discussed at more length elsewhere, it would make sense to de-emphasize the latter further.
May I ask what your feelings on a pause were beforehand?
I like your proposed third frame as a somewhat hopeful vision for the future. Instead of pointing out why you think the other frames are poor, I think it would be helpful to maintain a more neutral approach: elaborate on the assumptions each frame makes, and link to your discussion of those assumptions in a sidenote.
I'm just noting that you are assuming we have many robustly aligned AIs, in which case I agree that takeover seems less likely.
Absent this assumption, I don't think "AIs will form a natural, unified coalition" is the necessary outcome, but it seems plausible that the other outcomes would look functionally the same for us.
Again, this is just one salient example, but: do you find it unrealistic that top human-level persuasion skills (think interchangeably Mao, Sam Altman, and FDR, depending on the audience) combined with a million times ordinary communication bandwidth (i.e. carrying on that many conversations at once) would enable you to take over the world? Or would you argue that AI is never going to reach that level?
I agree that this would be interesting to explore, but I heavily disagree that having a detailed answer to it substantially changes the prediction of x-risk.
That's fair enough, and levels of background understanding vary (I don't have a relevant PhD either), but then the criticism should be that the point is easily misunderstood, rather than a big deal being made about the strawman position being factually wrong. That would also be much more constructive than adversarial criticism.
It does look like there is an interpretation of EY's basic claims that is roughly reasonable and one that is clearly false and unreasonable, and you assumed he meant the clearly unreasonable thing and attacked that. Absent further evidence, I think it's fair for others to say "he couldn't possibly have meant that" and move on.
Just judging from his Twitter feed, I got the weak impression that D'Angelo is somewhat enthusiastic about AI, and I didn't catch any concerns about existential safety.
I agree that the first sentence of your original comment is an interesting observation and that there might be an interesting thought here in how this situation interacted with gender dynamics.
I don't like the rest of your comment, though, since it seems to reduce the role of the female board members to their gender, and it is suggestive in a way that doesn't actually seem helpful for understanding the situation.
There’s a broader point here about the takeover of a non-profit organization by financial interests that I’d really like to see fought back against.