This post can be summarized as “Aschenbrenner’s narrative is highly questionable”. Of course it is. From my perspective, having thought deeply about each of the issues he’s addressing, his claims are also highly plausible. To “just discard” this argument because it’s “questionable” would be very foolish. It would be like driving with your eyes closed once the traffic gets confusing.
This is the harshest response I’ve ever written. To the author, I apologize. To the EA community: we will not help the world if we fall back on vibes-based thinking and calling things we don’t like “questionable” to dismiss them. We must engage at the object level. While the future is hard to predict, it is quite possible that it will be very unlike the past, but in understandable ways. We will have plenty of problems with the rest of the world doing its standard vibes-based thinking and policy-making. The EA community needs to do better.
There is much to question and debate in Aschenbrenner’s post, but it must be engaged with at the object level. I will do that, elsewhere.
On the vibes/ad-hominem level, note that Aschenbrenner also recently wrote “Nobody’s on the ball on AGI alignment”. He appears to believe (there and elsewhere) that AGI is a deadly risk, and we might very well all die from it. He might be out to make a quick billion, but he’s also serious about the risks involved.
The author’s object-level claim is that they don’t think AGI is imminent. Why? How sure are you? How about we take some action or at least think about the possibility, just in case you might be wrong and the many people close to its development might be right?
It seems to me that you are missing my point. I’m not trying to dismiss or debunk Aschenbrenner. My point is to call out that what he is doing is harmful to everyone, including those who believe AGI is imminent.
If you believe that AGI is coming soon, then shouldn’t you try to convince other people of this? If so, shouldn’t you be worried that people like Aschenbrenner ruin that effort by presenting themselves as conspiracy theorists?
We must engage at the object level. [...] We will have plenty of problems with the rest of the world doing its standard vibes-based thinking and policy-making. The EA community needs to do better.
Yes! That is why what Aschenbrenner is doing is so harmful: he is using an emotional or narrative argument instead of a real object-level argument. Like you say, we need to do better.
The author’s object-level claim is that they don’t think AGI is imminent. Why? How sure are you? How about we take some action or at least think about the possibility [...]
I have read the technical claims made by Aschenbrenner and many other AI optimists, and I’m not convinced. There is no evidence of any kind of general-intelligence abilities surfacing in any of the current AI systems. People have been trying to achieve that for decades, and especially for the past couple of years, but there has been almost no progress on that front at all (in-context learning is one of the biggest advances I can think of, and it can hardly even be called learning). While I do think that some action can be taken, what Aschenbrenner suggests is, as I reiterate in my text, too much given our current evidence. Extraordinary claims require extraordinary evidence, as the saying goes.