My biggest takeaway from EA so far has been that the difference in expected moral value between the consensus choice and its alternative(s) can be vastly larger than I had previously thought.
I used to think that “common sense” would get me far when it came to moral choices. I even thought that the difference in expected moral value between the “common sense” choice and any alternatives was negligible, so much so that I made a deliberate decision not to invest time into thinking about my own values or ethics.
EA radically changed my opinion. I now hold the view that the consensus view is frequently wrong, even when the stakes are high, and that it is possible to make dramatically better moral decisions by approaching them with rationality and a better-informed ethical framework.
Sometimes I come across people who are familiar with EA ideas but don’t particularly engage with them or the community. I often feel surprised, and I think the above is a big part of why. Perhaps more emphasis could be placed on this expected moral value gap in EA outreach?
I’ve found not many people bother to play arbitrage with prosocial outcomes.
You essentially need someone who cares about prosocial outcomes, is quantitative and stringent with calculations rather than just going by consensus, and is sufficiently motivated to make life changes. In a way, it requires being agreeable enough to care about others while being disagreeable enough to go against social consensus and gut feel.
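To make the arbitrage concrete, here is a toy back-of-the-envelope sketch (all figures are hypothetical, chosen purely for illustration): when two options differ substantially in cost-effectiveness, the expected value forgone by defaulting to the consensus choice is large, not a rounding error.

```python
# Toy expected-value comparison. All numbers are hypothetical,
# used only to illustrate the size of the gap being discussed.

budget = 10_000  # dollars available to donate

# Hypothetical cost per unit of good (e.g., one year of healthy life):
consensus_cost = 500    # a popular, "common sense" option
researched_cost = 50    # a carefully evaluated alternative

consensus_value = budget / consensus_cost    # 20 units of good
researched_value = budget / researched_cost  # 200 units of good

gap = researched_value - consensus_value
print(f"Consensus choice:  {consensus_value:.0f} units of good")
print(f"Researched choice: {researched_value:.0f} units of good")
print(f"Forgone value:     {gap:.0f} units -- a 10x difference")
```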
Early adopters get to play a lot of arbitrage.