[Question] What is most confusing to you about AI stuff?

I want to get a sense of what kinds of things EAs — who don’t spend most of their time thinking about AI stuff — find most confusing/uncertain/weird/suspect/etc. about it.

By “AI stuff”, I mean anything to do with how AI relates to EA.

For example, this includes:

  • What’s the best argument for prioritising AI stuff? and

  • How, if at all, should I factor AI stuff into my career plans?

but doesn’t include:

  • How do neural networks work? (except insofar as it’s relevant to your understanding of how AI relates to EA).

Example topics: AI alignment/safety, AI governance, AI as a cause area, AI progress, the AI alignment/safety/governance communities, …

I encourage you to have a low bar for writing an answer! Short, off-the-cuff thoughts are very welcome.