I encourage people in EA who care a lot about AI safety/AI alignment to engage more with the object-level technical discussions relevant to forecasting AGI (such as my most recent post). My impression is that the majority of people in EA who believe near-term AGI is likely haven't spent much time engaging with high-quality counterarguments. I have this impression because whenever I bring up those counterarguments, that is what I keep finding.