Thanks for this comment!
I think your arguments about your own motivated reasoning are somewhat moot, since they read more as an explanation that your behavior/public-facing communication isn't outright deception (which seems right!). As I see it, motivated reasoning is to a large extent about deceiving yourself and maintaining a coherent self-narrative, so it's perfectly plausible that one is willing to pay a substantial cost in order to maintain this. (Speaking generally; I'm not very interested in discussing whether you're doing it in particular.)
I think this misses the point: the financial gain comes from being central to ideas around AI in general. Given that baseline, being on the doomer side tends to carry a huge financial opportunity cost.
At the very least it's unclear, and I think you need a strong argument to claim that anyone financially profits from being a doomer.
One should stick to the original point that raised the question about salary.
Is $600K a lot of money for most people, and does EY hurt his cause by accepting this much? (Perhaps, but not the original issue.)
Does EY earning $600K mean he's benefitting substantially from maintaining his position on AI safety? E.g., if he were more pro AI development, would this hurt him financially? (Very unlikely IMO, and that was the context Thomas was responding to.)
You could imagine a Yudkowsky endorsement (say, with the narrative that Zuck talked to him, admits he went about it all wrong, and is finally taking the issue seriously, just to entertain the counterfactual...) raising Meta AI from "nobody serious wants to work there and they can only get talent by paying exorbitant prices" to "they finally have access to serious talent and can get a critical mass of people to do serious work". That'd arguably be more valuable than whatever they're doing now.
I think your answer to the question of how much an endorsement would be worth mostly depends on some specific intuitions that I imagine Kulveit has for good reasons but most people don't, so it's a bit hard to argue about. It also doesn't help that in every case other than Anthropic and maybe DeepMind, entertaining the possibility at all requires some weird hypotheticals.
If you ask the AIs, they give numbers in the tens-of-millions to tens-of-billions range, with around 1 billion as the central estimate. (I haven't extensively controlled for the effect, and some of the calculations appear driven by narrative.)
Personally I find it hard to judge and tend to lean towards "no" when trying to think it through, but it's not obviously nonsense.
This doesn't seem like a reasonable way to operationalize it. The endorsement would create much less value for the company if it were clear they were paying for it. And I highly doubt Amodei would be in a position to admit they'd want such an endorsement even if it did in fact benefit them.
I was only mentioning Karpathy as someone reasonable who repeatedly points out the lack of online learning and seems to have (somewhat) longer timelines because of that. This is solely based on my general impression. I agree the stated probabilities seem wildly overconfident.
I agree that that comment may be going too far with claiming “bad faith”, but the article does have a pretty tedious undertone of having found some crazy gotcha that everyone is ignoring. (I’d agree that it gets at a crux and that some reasonable people, e.g. Karpathy, would align more with the OP here)
What have they done, or are they planning to do, that seems worth supporting?
There’s a broader point here about the takeover of a non-profit organization by financial interests that I’d really like to see fought back against.
“The most likely explanation for a weird new idea not being popular is that it’s wrong.”
I agree with much of the rest of the comment, but this seems wrong—it seems more likely that these things just aren’t very correlated.
Just noting that these are possibly much stronger claims than "AGI will be able to completely disempower humanity" (depending on how hard it is to solve cold fusion a posteriori).
This is not a fair critique of the post; he's responding to a hypothetical discussed on Twitter.
At the risk of sounding contrarian, it's really not clear to me that anything "went wrong". From my outside perspective, it's not like there was a clear mess-up on the part of EAs anywhere here, just a difficult situation managed to the best of people's abilities.
That doesn't mean it's not worth pondering whether any aspect was handled badly, or more broadly what one can take away from this situation (although we should be wary of over-updating on single notable events). But, not knowing the counterfactuals, and absent a clear picture of what things "going right" would have looked like, it's not evident that this should be chalked up as a failing on the part of EA.
From gwern's summary over on LessWrong, it sounds like the actual report only stated that the firing was "not mandated", which could be interpreted as either "not justified" or "not required". Is it clear from the legal context that the former is implied?
It certainly does seem to push capabilities, although one could argue about whether the extent of that push is significant.
Being confused and skeptical about their adherence to their stated philosophy seems justified here, and it is up to them to explain their reasoning behind this decision.
On the margin, this should probably update us towards believing they don’t take their stated policy of not advancing the SOTA too seriously.
You don't need to be an extreme longtermist to be sceptical about AI; it suffices to care about the next generation and not want extreme levels of change. I think focusing too much on differing morals is the wrong lens here.
The most obvious explanation for how Altman and people more concerned about AI safety (not specifically EAs) differ seems to be their estimates of how likely AI risk is compared to other risks.
That being said, the point that it's disingenuous to ascribe cognitive bias to Altman for holding whatever opinion he holds is a fair one, and one shouldn't push it too far in view of general discourse norms. Still, given Altman's exceptional capacity for unilateral action due to his position, it's reasonable to be at least concerned about it.
I realize that my question sounded rhetorical, but I'm actually interested in your sources or reasons for your impression. I certainly don't have a good idea of the general opinion, and the media I consume is biased towards what I consider reasonable takes. That being said, I haven't encountered the position you're concerned about very much and would be interested to hear where you did. Regarding this forum, I imagine one could read it into some answers, but overall I don't get the impression that the AI CEOs are seen as big safety proponents.
Who is considering Altman and Hassabis thought leaders in AI safety? I wouldn't even consider Altman a thought leader in AI; his extraordinary skill seems mostly social and organizational. There's maybe an argument for Amodei, as Anthropic is currently the only one of these companies whose commitment to safety over scaling is at least reasonably plausible.
Where is this claim being made? I think the suggestion was that someone found it desirable to reduce the financial incentive gradient around EY taking any particular public stance, not the vastly more general statement you're suggesting.