Hi, I went to Lessonline after registering for EAG London; my impression of why both events were held on the same weekend is something like:
Events around that weekend (Manifest being held the weekend after Lessonline) informed Lessonline’s dates (though why not the weekend after Manifest, then?)
People don’t travel internationally as much for EAGs (someone cited ~10% of attendees to me, but on reflection that seems an underestimate).
I imagine EAG Bay Area, Global Catastrophic Risks in early Feb also somewhat covered the motivation for “AI Safety/EA conference”.
I think you’re right that it’s not *entirely* a coincidence that Lessonline conflicted with EAG Bay Area, but I’m guessing this was done somewhat casually, and probably reasonably.
I think it’s odd, and others have noted this too, that the most significant AI safety conference shares space with things unrelated on an object level. A further oddity I’ve heard people raise: why bother going to a conference like this when I live in the same city as the people I’d most want to talk with (Berkeley/SF)?
Finally, I feel weird about AI, since I think insiders are only becoming more convinced of the likelihood of extreme outcomes (AI capabilities). I think the cause has only become more important, given that most people have updated their timelines earlier rather than later, and this includes Open Phil’s versions of these estimates (Ajeya Cotra’s and Joe Carlsmith’s AI timelines). That said, I’ve heard arguments that it’s actually less important on the grounds that “the cat’s out of the bag and not even Open Phil can influence trajectories here.” Maybe AI safety feels less neglected because it’s being advocated for by large labs, but that may be both a result of EA/EA-adjacent efforts and not really enough to solve a unilateralist problem.
MATS is happening one week after Manifest.