Thanks for sharing, good to read. I got most excited about 3, 6, 7, and 8.
As far as 6 goes, I would add that I think it would probably be good if AI Safety had a more mature academic publishing scene in general and some more legit journals. There is a place for the Alignment Forum, arXiv, conference papers, and such, but where is Nature AI Safety or its equivalent?
I think there is a lot to be said for basically raising the waterline there. I know there is plenty of AI Safety stuff that has been published for decades in perfectly respectable academic journals and such. I personally like the part in "Computing Machinery and Intelligence" where Turing says that we may need to rise up against the machines to prevent them from taking control.
Still, it is a space I want to see grow and flourish big time. In general, big ups to more and better journals, forums, and conferences within such fields as AI Safety / Robustly Beneficial AI Research, Emerging Technologies Studies, Pandemic Prevention, and Existential Security.
EA Forum, LW, and the Alignment Forum have their place, but these ideas of course need to germinate out past this particular clique/bubble/subculture. I think more and better venues for publishing are probably very net good in that sense as well.
7 is hard to think about but sounds potentially very high impact. If any billionaires ever have a scary ChatGPT interaction or a similar come-to-Jesus moment and google "how to spend 10 billion dollars to make AI safe" (or even ask Deep Research), then you could bias/frame the whole discussion/investigation heavily from the outset. I am sure there is plenty of equivalent googling by staffers and congresspeople in the process of making legislation now.
8 is right there with AI tools for existential security. I mostly agree that an AI product which didn't push forward AGI but did increase fact checking would be good. This stuff is so hard to think about. There is so much moral hazard in the water, and I feel like I am "vibe captured" by all the Silicon Valley money in the AI x-risk subculture.
Like, for example, I am pretty sure I don't think it is ethical to be an AGI scaling/racing company, even if Anthropic has better PR and vibes than Meta. Is it okay to be a fast follower though? Competing in terms of fact checking, sure, but is making agents more reliable or teaching Claude to run a vending machine "safety", or is that merely equivocation?
Should I found a synthetic virology unicorn, on the premise that we will be way chiller than the other synthetic virology companies? And it's not completely disanalogous, because there are medical uses for synthetic virology, and pharma companies are also huge, capital-intensive, high-tech operations that spend hundreds of millions on a single product. Still, that sounds awful.
Maybe you think armed balance of power with nuclear weapons is a legitimate use case. It would still be bad to run a nuclear bomb research company that scales up production and drives down the cost of nuclear weapons. But idk. What if you really could put in a better control system than the other guy? Should hippies start military tech startups now?
Should I start a competing plantation that, in order to stay profitable and competitive with other slave plantations, uses slave labor and does a lot of bad stuff? And if I assume that the demands of the market are fixed and this is pretty much the only profitable way to farm at scale, then so long as I grow my wares at a lower cruelty-per-bushel than the average of my competitors, am I racing to the top? It gets bad. The same thing could apply to factory farming.
(edit: I reread this comment and wanted to go more out of my way to say that I don't think this represents a real argument made presently or historically for chattel slavery. It was merely an offhand, insensitive example of a horrific tension between deontology and simple goodness on the one hand and a slice of galaxy-brained utilitarian reasoning on the other.)
Like I said, there is so much moral hazard in the idea of an "AGI company for good stuff", but I think I am very much in favor of "AI for AI Safety" and "AI tools for existential security". I like "fact checking" as a paradigm example of a prosocial use case.