This puts some of my concerns into a much better form than I could have produced, so thank you kindly. If I understood your piece correctly, this sort of innovation is concerning with GPT-4 not because it would necessarily produce an AGI capable of x-risk, but because the next models, treated the same way (open access to the API), might. I agree with this: people are haphazardly pursuing whatever definition of AGI they conceive of in search of exciting capabilities, and we may end up with some external individual, not even OpenAI itself, developing something dangerous. I believe this is reflected somewhere in the LW/AI Safety litany.
I'm also curious what happened to the LW post. I know they've increased their moderation standards, but it also says it was deleted by the author? I always feel like the technical AI Safety discussion there is higher quality...
I appreciate the kind words :). Yes, it seems like you understood my main points. I would also definitely like feedback on the more technical aspects of the danger here and how far they extend.
These were my first posts on the EA Forum and on LW, so a content moderator had to approve both. The EA Forum approved mine almost right away; LW hasn't yet, sadly.
I really have no idea; I'm pretty nontechnical. I see it was recently accepted onto LW. Best of luck getting a better response there! Upvoted for visibility (also because I'm curious).