I think the focus is generally placed on the cognitive capacities of AIs because it is expected that it will just be a bigger deal overall.
There is at least one 80,000 Hours podcast episode on robotics. It tries to explain why robotics is hard to apply ML to, but I didn’t understand it.
Also, I think Max Tegmark wrote some stuff on slaughterbots in Life 3.0. Yikes!
You could try looking for other differential development stuff too if you want. I recently liked: AI Tools for Existential Security. I think it’s a good conceptual framework for emerging tech / applied ethics stuff. Of course, it still leaves you with a lot of questions :)