I'm living in France. I learned about EA in 2018, found it great, and dug a lot into the topic. The idea of "what in the world improves well-being or causes suffering the most, and what can we do about it" really influenced me a whole lot, especially when combined with meditation, which allowed me to be more active in my life.
One of the most reliable things I have found so far is helping animal charities: farmed animals are much more numerous than humans (and have much worse living conditions), and there absolutely is evidence that animal charities are achieving improvements (especially The Humane League). I have tried to donate a lot there.
Longtermism could also be important, but I think that we'll hit energy limits before getting to an extinction event. I wrote an EA Forum post about that here: https://forum.effectivealtruism.org/posts/wXzc75txE5hbHqYug/the-great-energy-descent-short-version-an-important-thing-ea
There's something I'd like to understand here. Most of the individuals that an AGI will affect will be animals, including invertebrates and wild animals, simply because they are so numerous, even if one were to grant them a lower moral value (although artificial sentience could be up there too). AI is already being used to make factory farming more efficient (the AI for Animals newsletter covers this in more detail).
Is this an element you considered?
Some people in AI safety seem to consider only humans in the equation, while others assume that an aligned AI will, by default, treat animals correctly. Conversely, some people push for an aligned AI that takes all sentient beings into account (see the recent AI for Animals conference).
I'd like to know what 80k's position on that topic will be (if this is public information).