I think that to switch my position on crux 2 using only timeline arguments, you’d have to argue for something like a <10% chance of transformative AI within 50 years.
That makes sense. “Plausibly soonish” is pretty vague, so I pattern-matched it to something closer to “by default, it will come within a few decades.”
It’s reasonable that people with different comparative advantages should have a higher threshold for caring. If there were only a 2% chance of transformative AI in 50 years, and I were in charge of effective altruism resource allocation, I would still want some people (perhaps 20-30) looking into it.