Hmm, I remember seeing a criticism somewhere in the EA-sphere that went something like:
“The term “longtermism” is misleading because in practice “longtermism” means “concern over short AI timelines”, and in fact many “longtermists” are concerned with events on a much shorter time scale than the rest of EA.”
I thought that was a surprising and interesting argument, though I don’t recall who initially made it. Does anyone remember?
This sounds like a misunderstanding to me. Longtermists concerned with short AI timelines are concerned with them precisely because of AI’s long-lasting influence on the far future.