You might want to read this as a counter to AI doomerism: https://www.lesswrong.com/posts/LDRQ5Zfqwi8GjzPYG/counterarguments-to-the-basic-ai-x-risk-case
And this for a way to contribute to solving this problem without getting into alignment:
https://www.lesswrong.com/posts/uFNgRumrDTpBfQGrs/let-s-think-about-slowing-down-ai
This too:
https://betterwithout.ai/pragmatic-AI-safety
And this for the case that we should stop using neural networks:
https://betterwithout.ai/gradient-dissent