> …earning-to-give, which I would consider even more reprehensible than SBF stealing money from FTX customers and then donating that to EA charities
AI capabilities EtG being morally worse than defrauding-to-give sounds like a strong claim.
There exist worlds where AI capabilities work is net positive. I appreciate that you may believe that we’re unlikely to be in one of those worlds (and I’m sure lots of people on this forum agree).
However, given this uncertainty, it seems surprising to see language as strong as “reprehensible” being used.
> AI capabilities EtG being morally worse than defrauding-to-give sounds like a strong claim.
I mean, I do think causing all of humanity to go extinct is vastly worse than causing large-scale fraud. Both are of course deeply reprehensible, but the former is vastly worse and justifies a much stronger response.
Of course, working on capabilities is a much smaller probabilistic increase in humanity's extinction risk than SBF's relatively direct fraudulent activities, and I do think this means the average AI capabilities researcher is causing less harm than Sam. But founding an organization like OpenAI seems to me to have substantially worse consequences than Sam's actions. (For fraud we often have clearer lines we can draw, and norm enforcement should take into account uncertainty and ambiguity as well as a whole host of other considerations, so I don't actually think most people should react to someone working at a capabilities lab to make money the same way they would react to hearing that someone was participating in fraud, though I think both deserve quite a strong response.)
> There exist worlds where AI capabilities work is net positive.
I know very few people who have thought a lot about AI x-risk who think that marginally speeding up capabilities work is good.
There was a bunch of disagreement on this topic over the last few years, but I think we are now close enough to AGI that almost everyone I know in the space would wish for more time and for things to marginally slow down. People who still think that marginally speeding up is good do exist, and there are arguments remaining, but there are of course also arguments for participating in many other atrocities, and the mere existence of someone of sane mind supporting an endeavor should not protect it from criticism, nor serve as a strong excuse to do it anyway.
There were extremely smart and reasonable people supporting the rise of the Soviet Union and the communist experiment, and I of course think those people should be judged extremely negatively in hindsight, given the damage that caused.