The reasons you provide would already be sufficient for me to think that AI safety will not be an easy problem to solve. To add one more example to your list:
We don’t know yet whether LLMs will be the technology that reaches AGI; it could also be one of a number of other technologies that, like LLMs, make a certain breakthrough and then suddenly become very capable. So just looking at what we see developing now and extrapolating from the currently most advanced model is quite risky.
For the second part, about your concern for the welfare of AIs themselves: I think this is something very hard for us to imagine. We anthropomorphize AI, so words like ‘exploit’ or ‘abuse’ make sense in a human context where beings experience pain and emotions, but in the context of AI they might simply not apply. That said, I still know very little in this area, so I’m mainly repeating what I’ve read is a common mistake to make when judging morality with regard to AI.
Thanks for this reply! That makes sense. Do you know how likely people in the field think it is that AGI will come from just scaling up LLMs versus requiring some big new conceptual breakthrough? I hear people talk about this question but don’t have much sense of what the consensus is among the people most concerned about AI safety (if there is a consensus).
Since these developments are really bleeding edge, I don’t know who is truly an “expert” I would trust to evaluate this.
The closest thing to an answer is maybe this recent article I came across on Hacker News, where the comments are often more interesting than the article itself: https://news.ycombinator.com/item?id=35603756
If you read through the comments, which mostly come from people who have followed the field for a while, they seem to agree that it’s not just a matter of “scaling up the existing models we have now,” mainly for cost reasons, but rather of doing things more efficiently than we do now. I don’t have enough knowledge to say how difficult that is: whether those different methods will need to be something entirely new, or whether it’s just a matter of combining what already exists with what we have.
The article itself should be viewed skeptically, because OpenAI’s CEO has plenty of reasons to issue a public statement, and I wouldn’t take anything in it at face value. But the comments are perhaps a bit more trustworthy and perspective-giving.