Thank you for your supportive comment. I think David Mathers is an exceptionally and commendably valuable contributor to the EA Forum in terms of engaging deeply with the substance of arguments around AI safety and AGI forecasting. David engages in discussions with a high level of reasoning transparency, which I deeply appreciate. It isn’t always clear to me why people who fall on the opposite side of debates around AI safety and AGI forecasting believe what they do, and talking to David has helped me understand this better. I would love to have more discussions about these topics with David, or with interlocutors like him. I feel as though there is still much work to be done in bringing the cruxes of these debates into sharp relief.
The EA Forum has a little-used “Dialogues” feature that I think has some potential. Anyone who would be interested in having a Dialogue on AGI forecasting and/or AGI safety should send me a private message.
On to the rest of your comment:
I think current investments in AGI safety will end up being wasted. It's a bit like paying philosophers in the 1920s to think about how to mitigate social media addiction, years before the first proper computer was built, and even before the concept of a Turing machine was formalized. There is simply too much we don't know about how AGI might eventually be built.
Conversely, investments in narrow, prosaic “AI safety” like making LLM chatbots less likely to give people dangerous medical advice are modestly useful today but will have no applicability to AGI much later on. Other than having the name “AI” in common and running on computers using probably some sort of connectionist architecture, I don’t think today’s AI systems will have any meaningful resemblance to AGI, if it is eventually created.