Hi Trev!
Very briefly on your points:
- We don’t think AI needs to break thermodynamics to be dangerous.
- We don’t think all human-specified goals are safe, and we don’t know how to give a safe one to an extremely powerful AI.
- We are not worried about self-awareness or consciousness in particular.
Consider familiarizing yourself with some of the basic arguments: for example, this playlist; the “The Road to Superintelligence” and “Our Immortality or Extinction” posts on Wait But Why for a fun, accessible introduction; and Vox’s “The case for taking AI seriously as a threat to humanity” as a high-quality mainstream explainer piece.
The free online Cambridge course on AGI Safety Fundamentals provides a strong grounding in much of the field and a cohort + mentor to learn with.[1]
AI Safety Support has long been a remarkably active in-the-trenches group, patching many otherwise gaping holes in the ecosystem: someone available to talk and help people get a basic understanding of the lie of the land from a friendly face, resources to keep people informed in ways that were otherwise neglected, and support with fiscal sponsorship and coaching. This has been especially valuable for people trying to join the effort who don’t have a close connection to the inner circles, from inside which it’s less obvious that these things are needed.
I’m sad that the supporters weren’t themselves adequately supported to keep up this part of the mission, but I’m excited by JJ’s new project: Ashgro.
I’m also excited by AI Safety Quest stepping up as a distributed, scalable, grassroots version of several of AI Safety Support’s main duties, which are ever more keenly needed as awareness spreads and the flood of people who want to help grows.