This puts some of my concerns into a much better form than I could have produced, so thank you kindly. If I understood your piece correctly, this sort of innovation is concerning with GPT-4 not because it would necessarily produce x-risk-adequate AGI, but because the next models treated the same way (open access to the API) would. I agree with this: people are actively, and rather haphazardly, pursuing whatever definition of AGI they conceive of for the sake of exciting capabilities, and we may end up stuck with some external party, not even OpenAI, developing something dangerous. I believe this is reflected somewhere in the LW/AI Safety litany.
I'm also curious what happened to the LW post. I know they've increased their moderation standards, but it also says it was deleted by the author? I always feel like the technical AI Safety discussion there is higher quality...
I appreciate the kind words :). Yes, it seems like you understood my main points. I also would definitely like feedback on the more technical aspects of the danger here and how far they extend.
These were my first posts on the EA Forum and on LW, so a content moderator had to approve both. The EA Forum post was approved almost right away; the LW one hasn't been yet, sadly.
I really have no idea; I'm pretty nontechnical. I see it was recently accepted onto LW. Best of luck getting a better response there, upvoted for visibility (also because I'm curious).
Great post (especially for a first one, kudos)!
One recent piece of evidence that updated me further towards “many smart people are quite confused about the problem and in particular anthropomorphize current AI systems a lot” was Lex Fridman’s conversation with Eliezer (e.g., at 1:02:36):
I'm not by any means a tech person, and I don't know much about software or coding. I view myself as a common-sense thinker, which in my experience most technical geniuses really can't or won't do well when going after a hard problem like this, which is very worrying in this matter. There is only one evolutionary path for this tech in my opinion, and at the end of that path something extremely intelligent and alien is embodied there, with the ability to manufacture nano hardware and self-replicating software capable of advancement with every iteration; that part is probably far in the future, though. The presently worrying thing is the advent of this software in the bio-tech field, which if used in a wrong or malicious way could be extremely dangerous. At the end of the day we have to ask ourselves: is it worth it? Human beings don't have the best track record when it comes to a lot of the decisions we've made in the past, and the ramifications of this one could be a very, very big and final ouch in the long run.