As explored in this article (https://forum.effectivealtruism.org/posts/tuMzkt4Fx5DPgtvAK/why-solving-existential-risks-related-to-ai-might-require), I share the author’s view that current AI safety approaches do not work reliably, because such efforts are misaligned in ways that are often invisible. More research is therefore needed to develop new approaches. The author argues that the free market tends to prioritize AI progress over safety because existential risk is an “externality,” and that philanthropic funding is consequently useful for filling the resulting funding gap.