This is concerning if the bait is cool, old-fashioned volunteering and the switch is to AI. See my answer to David's comment: from my background, I interpret AI risk as a fad. It is not without its merits, and it will become relevant when/if robots self-manufacture and also control all the means of production, but realistically that is at least 2-3 human generations away.
You might be interested in these post series I put together; so far there are just 3 posts in each.
The series “Skepticism about near-term AGI” is general and tries to be accessible and interesting to a newcomer to these debates, although some of the posts may have technical and inaccessible parts.
The post “3 reasons AGI might still be decades away” by Zershaaneh Qureshi on the 80,000 Hours blog is very quick and accessible, and I’d like to add it to the series, but it hasn’t been published on the EA Forum. I recommend that post too.
The other series, “Criticism of specific accounts of imminent AGI”, is very much inside baseball and might feel unimportant or inaccessible to newcomers to these debates. Each of the 3 posts responds to something very specific in the AGI debates, and if you don’t know or care about that very specific thing, then you might not care about those posts. I think they are all excellent and necessary pieces of criticism; it’s just that we’re really getting into the weeds at that point, so someone who isn’t caught up on the AGI debates might be totally confused. So, I’d recommend the “Skepticism about near-term AGI” series first.
To be clear, I think there is absolutely no intention of doing this. EA existed before AI became hot, and many EAs have expressed concerns about the recent, hard pivot towards AI. It seems in part, maybe mostly (?), to be a result of funding priorities. In fact, a feature of EA that hopefully makes it more immune to donor influence than many impact-focused communities (although far from totally immune!) is the value placed on epistemics: decisions and priorities should be argued for clearly and transparently, including why AI should take priority over other cause areas. Glad to have you engage skeptically on this!
A cool read on a related topic, the technosphere:
https://theconversation.com/climate-change-weve-created-a-civilisation-hell-bent-on-destroying-itself-im-terrified-writes-earth-scientist-113055
and the original 2014 paper coining the term, by Peter Haff:
https://journals.sagepub.com/doi/10.1177/2053019614530575