Hi,
I agree EA is not just about maximising expected value, but I think that is a great principle. Connecting it to dishonesty and illegality seems pretty bad. That said, there are some rare cases where illegal actions are recognised as good, and one should be open to such cases.
I think the dishonesty and illegality are very relevant here because, to most classical total utilitarians worried about extinction from AI, the dishonesty and illegality at FTX had positive EV.
What?! This is not at all the case. The apparent fraud at FTX was massively bad in expectation (regardless of whether the consequences are evaluated from a classical utilitarian or a common-sense morality perspective).
ETA: "Qualy the lightbulb" agrees:
Not if the fraud helps you fund more AI safety research, which has near-infinite expected value, especially if you think AGI is imminent.
Not if this just destroys momentum towards sustainable funding for AI safety and other longtermist causes.
But if you think AGI is very close, then there isn't a lot of time for you to get caught, and there isn't a lot of time for future AI safety funding to emerge.
And this is why utilitarianism is a framework, not something to follow blindly. Humans cannot do proper consequentialism; we are not smart enough. That's why, when we do consequentialist reasoning and the result comes out "Therefore, we should steal billions of dollars", the correct response is not in fact to steal billions of dollars, but rather to treat the answer the same way you would if you concluded a car was travelling at 16,000 km/h in a physics problem: you sanity-check the answer against common sense, realise you must have made a wrong turn somewhere, and go back. This has been talked about by EA or EA-adjacent people since before EA existed.
Yes, agreed, but as a utilitarian-esque movement grows, the chance of a member pursuing reckless, blind utilitarianism also grows, so we need to give the ideas you describe more prominence within EA.
To be honest, classical utilitarian or not, I find it hard to be confident about the sign of illegal and dishonest actions when these could potentially have both high downside and high upside.
For shortish AI timelines, a very low chance of the fraud being detected, only super-wealthy people losing money, and the fraudulent people having little connection to EA, I can see fraud being positive ex ante. I can also see it being quite negative, e.g. if the likelihood of the fraud being detected was 50%, the people harmed were ordinary citizens (harming the median US citizen is much worse in terms of reputational risk than quietly stealing money from billionaires), and the fraudulent people were strongly connected to EA.
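To illustrate the kind of back-of-the-envelope comparison I have in mind, here is a minimal worked example (all numbers are purely hypothetical, chosen only for illustration, not estimates of the actual FTX situation):

EV = p(detected) × (net value if detected) + (1 − p(detected)) × (net value if undetected)

With p(detected) = 10%, net value if detected = −100 (in arbitrary units of longtermist value), and net value if undetected = +20, the EV is 0.1 × (−100) + 0.9 × 20 = +8, i.e. positive ex ante. Raising p(detected) to 50% flips the sign: 0.5 × (−100) + 0.5 × 20 = −40.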
For the specific case of FTX, I am not confident at all, but I lean towards it being positive (I could easily see the sign change with just 30 min of research).