EAs should:
Make clear that they oppose what has happened.
See if anything can be done to repair the situation. For example, consider returning funds that were obtained through fraudulent activities (which may require raising money from others who were not involved), or even offering legal aid to those affected by the fraud.
Re-evaluate whether EA per se is still a positive ideology to be promoting (as compared to promoting particular academic fields, like AI safety, and important ideas, like existential risk).
In any EA-like community that continues to exist, we need to encourage people to behave in a more trustworthy manner. People should be praised for considering a range of moral perspectives and for deferring to others' views on risky projects, while we should distance ourselves from those who have misbehaved: Eisenberg, Vassar, Reese, and so on.
Require more verification from people who tend to behave in a naive utilitarian way. So long as some naive utilitarians are liable to act fraudulently, utilitarians should be held to a higher standard of transparency in entrepreneurial endeavours in order to raise investment rounds of comparable size, or to take on comparably prominent public roles. This is just what utilitarians should reasonably expect, given their track record.
Re-evaluate whether particular activities currently pursued by EA meet these new standards of trustworthiness. It would be worth considering whether to press on with party politics and youth outreach, for example.
Listen more to people who were rightly worried about FTX, and less to people who weren't, or who worried excessively about things that mattered little.