I think classical total utilitarianism + short-ish AI timelines + longtermism unavoidably endorses widespread fraud to fund AI safety research.
Yes, I can see this being true in some cases, especially if the people whose money is lost are quite wealthy (e.g. billionaires) and the likelihood of the fraud being detected is extremely low (e.g. 10^-6). For such cases, I think we should be open to fraud being good. However, this falls well short of endorsing fraud: widespread endorsement would have quite bad effects, and would itself be incompatible with the probability of the fraud being detected remaining extremely low.