This all makes sense to me. I am fairly new, but I also think that EAs already think a lot about the downsides of their actions (the pattern of "advantages of X minus disadvantages of X means the expected value is Y" seems pretty common; Rethink Priorities' portfolio builder tool (https://rethinkpriorities.org/publications/portfolio-builder-tool) also has "expected negative value" components; and people seem to care a lot about downstream ripple effects from e.g. health interventions). Are there some specific examples of EAs ignoring downsides that motivated this post?
Yes, I believe some people in the community do talk about downstream ripple effects. My point here was that this is not reflected in CEA's values or principles, which I believe it should be. I do not have specific examples (it seems to me that the RP portfolio builder is made to reflect your own opinion of risk, not a factual risk). If I had to give some examples, I would say that a lot of EA orgs are not clear about their choices and the possible downsides of their work, such as GiveWell, The Life You Can Save, Giving Green, etc. For example, on neither GiveWell's nor The Life You Can Save's website can you find the GHD vs. AW dilemma, and on Giving Green's you cannot find the possible downsides of advancing or decarbonising energy. I really appreciate the work they are doing. More problematic for me are companies with a great impact, like OpenAI: they are making money developing a dangerous technology without acknowledging the risk they are taking. I realise my criticism may not be worth much, since I am too idealistic for the real world, where funders probably do not want to hear about GHD vs. AW. Nonetheless, I believe responsibility should be part of the EA values, and we should be more cautious in areas where the community has a great impact (because it took hold of neglected issues like AI).