Journalist and media studies professor, virtual communities consultant, climate change, urban planning and transit activist
drbrake
I hope you are right, but you should be aware that the opposite may also be true. Depending on the weights we give AI in valuing human and non-human thriving, AI may discover new ways to make humans happier at the expense of non-humans. There are people and organizations who would assign a moral weight of zero to the suffering of some or even all non-humans, and if those people win the argument then you might end up with an AI that is less to your taste than one that just emerges organically with basic guardrails.
For example, leaving aside second order effects on wider ecology, if you asked me how much intense suffering I would inflict on shrimp to save a human life, I would personally choose an almost unlimited amount.
While this is an important question to consider, it is by no means clear that we could get any short-term consensus about how moral alignment should be implemented. In practical terms, if an AI (AGI) intelligence is longer lived and/or more able to thrive and bring benefits to others of its kind, wouldn't it be moral to value its existence over that of a human being? Yet I think you would struggle to get AI scientists to embed that value choice into their code. Similarly, looking "down" the scale, in decisions where the lives or wellbeing of humans had to be balanced against animals, I am not sure there would be much likelihood of broad agreement on the relative value to attach to each in such cases.
I would encourage further research and advocacy on this point but at best this will be a long, long process. And you might not be happy with the eventual outcome!
At the moment there are no established guidelines in this area that I am aware of in the existing non-AI-related space (though I have not looked hard...), but if AI-related research/discussion did establish such guidelines, they might be propagated out into the rest of the policy world and set a precedent.
This does not address one possible use of alternative proteins: feeding them to domesticated carnivorous animals. Obviously many EA folks might prefer that we don't eat such animals or keep them as pets, but if we do, it would be better if their food did not have an adverse climate impact or did not involve more animal suffering (or both!). Alt proteins here would not need to have the same taste as the foods they replace, be tasty to humans, or pass strict safety guidelines; they would just need to be minimally acceptable to (and digestible by/safe for) their "target" animals. I recall that those breeding insects as food (not alt proteins, of course) are targeting this marketplace. Any thoughts? Research? Evidence of success?
You are right; I wrote in anger and take that part back (have edited above).
The EA movement has no single leader, but communication and recruitment are of course vital to its continuation, so there are mechanisms for senior figures to make their views known. It is not necessary for the movement to "take sides" in particular political battles, but the fact that Musk has funded EA work, is friends with key EA figures and has taken actions (like the all-out attack on USAID) that run directly counter to mainstream EA thinking suggests to me that EA needs to make its concerns clear.
If a public figure or organization (political or not) is aligned with the EA movement in the public mind (because of donations, common positions or their stated adherence to EA principles) and does things that are not consistent with EA values, the movement needs to condemn those actions.
Framing this as taking a political stand is misleading and misguided. I happen to oppose Musk's politics, but that is not why I urge EA leadership to oppose him; it is the ethical lapses I expect EA to condemn. If a populist left-wing leader in the US scrapped USAID because it was an instrument of American imperialism and the money was needed at home to fund social programming, I'd argue EA should condemn that in a similar manner.
The EA movement needs to be able to disown rogue "supporters", starting with Musk
"it does matter that there is one credible environmental org aligned with Democrats (there are also Republican climate orgs, like ClearPath) that pushes for it, it can make the difference between this being entirely dismissed as fossil fuel or Manchin demand to being an option that has support from clearly climate-motivated actors." ... actually, this is just one more reason why what the CATF is doing is retrograde. Supporting and aiding the development of CCS for, say, cement making is OK in my book, and there is plenty of room for experiments there that are directly applicable to future needs. This podcast is good on this point. The danger is that learning how to capture emissions from near-end-of-life coal plants in the US may not tell us all that much that is useful for deploying CCS where it is needed.
This is the kind of thing I would like to see more of. I would not invest myself because all the investments seem to be in individual projects; I would want to be able to invest in some fashion in a "basket" of companies and/or projects (ideally through a large, well-known investment company like Vanguard...)
[Question] Investing in climate mitigation in Africa
I know your long-run goals are the least "binding", but I would encourage you to be a little more cautious and evidence-based in your approach to growth as an intervention. Economic growth clearly offers benefits overall in developing countries, but it would surely be safer to say that your objective should be to study the relationship between economic growth and human development, and to work to understand the circumstances in which aid that enhances economic growth is more effective than alternative forms of aid.
You've reminded me about Dollar Street: https://www.gapminder.org/dollar-street/matrix which does the same thing as Children Just Like Me, but online and interactive.
EA gifts for kids?
Hadn't thought of that; seems a likely explanation!
The discussion about fistulas was here: https://blog.givewell.org/2008/08/20/fistula/
In one of the discussions, a founder of Operation Fistula turned up. It's a horrible-sounding condition, described in the disability weighting as: "has an abnormal opening between her vagina and rectum causing flatulence and feces to escape through the vagina. The person gets infections in her vagina, and has pain when urinating." It's caused when someone giving birth does not have access to a C-section, and it can be remedied for $288. (The report's a little unclear, suggesting that the operation's value lasts for 10 years; perhaps it stops working? Perhaps the lifespan of typical sufferers is in any case low?) Anyway, worth looking at if you are interested in this area.
Thanks for the additional readings. I think Paul Dolan is asking the right questions. I am disappointed that after a promising initial discussion eight years ago, Holden doesn't seem to have spoken again on the subject, and to the best of my knowledge there is still no way on GiveWell to put different weights on "impact" to give different results.
I don't understand your last paragraph, though. DALYs don't seem to measure economic effects on others at all, so if you do start to consider those effects, wouldn't that be a big argument for making some DALYs negative?
Getting past the DALY: different measures of "positive impact"
Suffering relief vs life extension
NB smile.amazon.com works like smile.amazon.co.uk. When I wrote to Amazon Smile, they responded: "We are currently working on expanding the AmazonSmile program to other countries. You are correct in stating that customers can currently support organizations in one of the 50 United States, Germany, Austria, or the United Kingdom."
I think @conflictaverse is on to something with his notion of "brand confusion". I wish there were an easy, widely understood shorthand I could use to indicate that I am inspired by the "early" EA project, although I am now concerned that the movement's centre of gravity seems to have drifted, both organizationally and philosophically, in directions I am not comfortable with. I am reluctant to abandon the EA label altogether, leaving it to that "wing" of the movement.