Going to merge replies into this one comment, rather than sending lots and flooding the forum. If I've @-mentioned you specifically and you don't want to respond in the chain, feel free to DM:
On neglectedness—Yep, fair point that the relevant metric here is neglectedness in the world, not within EA. I think there is still a point to make, but my phrasing was probably wrong: I should have framed it as 'AI Safety is too large a part of EA' rather than 'lack of neglectedness within EA implies lower ITN returns overall'.
On selection bias/other takes—These were only ever meant to be my own takes and reflections, so I definitely think they're only a very small part of the story. I guess, @Stefan_Schubert, I would be interested to hear about your impression of a 'lack of leadership' and any potential reasons why?
On the Bay/Insiders—It does seem like the Bay is convinced that AI is the only game in town? (Aschenbrenner's recent blog seems to validate this.) @Phib, I would be interested to hear you say more on your last paragraph; I don't think I entirely grok it, but it sounds very interesting.
On the Object Level—I think this one is for an upcoming sequence. Suffice to say that one can infer from my top-level post that I have very different beliefs on this issue than many 'insider EAs', and I do work on AI/ML for my day job![1] While David sketches out a case for his overall points, I think those points have been highly underargued and underscrutinised given their application in shaping the EA movement and its funding. So look out for a more specific sequence on the object level[2], maybe-soon-depending-on-writing-speed.
Yeah, thank you. I guess I was trying to say that the evidence only seems to grow stronger over time that the Bay Area's 'AI is the only game in town' view is accurate.
Insofar as: timelines for various AI capabilities have outpaced both superforecasters' and AI insiders' predictions; transformative AI timelines (at Open Phil, on prediction markets, and I think among AI experts) have shortened significantly over the past few years; the performance of LLMs has increased at an extraordinary rate across benchmarks; and we expect the next decade to extrapolate this scaling to some extent (with essentially hundreds of billions, if not tens of trillions, to be invested).
Although, yeah, I think to some extent we can't know whether this will continue to scale as neatly as we'd expect, and it's especially hard to predict categorically new futures like sustained exponential growth (10%, 50%, etc. growth per year). Given the forecasting efforts and the trends thus far, it feels like there's a decent chance of these wild futures, and people are kind of updating all the way? Maybe not Open Phil entirely (to the point that EA isn't just AI Safety), since they are hedging their altruistic bets in the face of some possibility that this decade could be 'the precipice', or one of the most important ever.
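(For intuition on how far apart those growth regimes end up, here is a quick back-of-the-envelope sketch; the 10% and 50% annual rates are just the illustrative figures above, not forecasts:)

```python
# Compound growth over a decade at a given annual rate.
def growth_factor(rate, years=10):
    """Total multiplier on output after `years` of compound growth."""
    return (1 + rate) ** years

for rate in (0.10, 0.50):
    print(f"{rate:.0%}/year for 10 years -> x{growth_factor(rate):.1f}")
```

At 10%/year the economy roughly 2.6x-es over a decade; at 50%/year it grows nearly 58x, which is why small differences in the assumed rate produce such categorically different futures.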
Misuse and AI risk seem like the negative valence of AI's transformational potential. I personally buy the arguments that transformational technologies need more reasoned steering and safety work; I also buy that EA has probably been a positive influence here, and that alignment research has been at least somewhat tractable. Finally, I think there's more that could be done to safely navigate this transition.
Also, re David (Thorstad?): yeah, I haven't engaged with his stuff as much as I probably should, and I really don't know how to reason for or against arguments around the singularity, exponential growth, and the potential of AI without deferring to people more knowledgeable/smarter than me. I do feel like I have seen the start and middle of the trends they predicted, and I predict they will extrapolate, based on my own personal use and some early reports on productivity increases.
I do look forward to your sequence and hope you do really well on it!
[1] Which I have recently left to do some AI research and see if it's the right fit for me.
[2] Currently tentatively titled "Against the overwhelming importance of AI x-risk reduction".