But if you think AGI is very close, then there isn't a lot of time for you to get caught, nor for future AI safety funding to emerge.
And this is why utilitarianism is a framework, not something to follow blindly. Humans cannot do proper consequentialism; we are not smart enough. That's why, when we do consequentialist reasoning and the result comes out as "Therefore, we should steal billions of dollars," the correct response is not in fact to steal billions of dollars, but to treat the answer the same way you would if you concluded a car was travelling at 16,000 km/h in a physics problem: you sanity-check the answer against common sense, realise you must have made a wrong turn somewhere, and go back. EA and EA-adjacent people have been making this point since before EA existed.
Yes, agreed, but as a utilitarian-esque movement grows, the chance that some member pursues reckless, blind utilitarianism also grows, so we need to give the ideas you describe more prominence within EA.