I… read this just today… and I was like wut???
...Until I saw the hat then the date XD
Thanks for making this, Willem and David! Really interesting. After seeing the IRI statements and then the mean scores, it didn’t initially look low to me, but compared to the average IRI scores across the years, that’s certainly a difference! It made me a little sad actually. I wonder what it might look like at present. Part of me feels it might still look similar, despite the changes in the EA community. Also, while it might be relatively low-priority, I for one would personally be fascinated/excited to see this done again!
(Not an AI welfare/safety expert by any stretch, just adding my two cents here! Also, the banner really piqued my interest, and I loved hovering over the footnote! I’ve thought about digital sentience before, but this banner and this week really put me into a “hmm...” state)
My view leans towards “moderately disagree.” (I fluctuated between this, neutral, and slightly agree.) For context, for AI safety I’d say “highly agree.” Thoughts behind my current position:
Why I’d prioritize it less:
I consider myself longtermist, but I have always grappled with the opportunity costs of highly prioritizing more “speculative” areas. I care about high-EV areas, but I also grapple with deprioritizing very tangible cause areas with existing beings that have high EV too. Looking at the table below, I’d lean towards giving more resources to AW rather than making AI welfare a priority right now.
I also worry about the ramifications of diverting more EAs into a very dense, specialized sector. While specialization is important, I’m concerned it can sometimes lead to a narrower focus that doesn’t fully account for the broader, interconnected systems of the world. In contrast, fields like biosecurity often consider a wider range of factors and take a more integrative perspective. That more holistic view can be crucial for addressing complex, multifaceted issues, and one reason I’d prioritize AI welfare less is the opportunity cost relative to areas that may be more holistic (not that AI welfare has no claim to being holistic).
I have some concerns that trying to help AI right now might make things worse, since we don’t yet fully understand which of the things being done now could make things riskier (Nathan said something to this effect in this thread).
I don’t know to what extent harms to AI welfare would be irreversible, compared to the harms from unaligned AI
It seems less likely that multiplanetary civilizations will develop alongside advanced AI, which reduces the likelihood of AI systems spread across the universe, and in turn reduces how much I’d prioritize AI welfare on a universal scale
Why I’d still prioritize it:
I can’t see myself assigning a 0% chance that AI would be sentient, and I can’t see myself assigning less than (edit:) 2% of effective altruism’s resources and talent to something wide-scale that I hold a possibility of being sentient, even if it’s less standard (i.e. further outside average moral circles), because of the big potential value creation, generally preventing suffering, and potentially enabling additional happiness, all of which I’m highly for.
I think more exploratory work in under-explored areas needs to be done, and just establishing enough baseline infrastructure is important for this kind of high-EV cause (assuming we expect AI to be very widespread)
I like trammell’s animal welfare analogy
Overall, I agree that resources and talent should be allocated to AI welfare because it’s prudent and can prevent future suffering. However, I moderately disagree with it being an EA priority, due to its current speculative nature and how I weigh it against AI safety. I do think AI safety and solving the alignment problem should be a priority, especially in these next few years, and I hold some confidence that this would also help prevent digital suffering.
Other thoughts:
I wonder if there’d ever be a conflict between AI welfare and human welfare, or the welfare of other beings. I haven’t put much thought here. Something that immediately comes to mind is that advanced AI systems could require substantial energy and infrastructure, potentially competing with human needs. From a utilitarian point of view, this presents a significant dilemma. However, there’s the argument that solving AI alignment could mitigate these issues, ensuring that AI systems are developed and managed in ways that don’t harm human welfare. My current thinking is that conflict between AI and human welfare is less likely if we solve the alignment problem and improve the policy infrastructure around AI. I might also look to bioethics and historical precedents suggesting that ethical alignment leads to better welfare outcomes.
Some media that have made me truly feel for AI welfare are “I, Robot,” “Her,” Black Mirror’s “Joan is Awful,” and “Klara and the Sun”!
this is super helpful! would be cool if we could see the %s given to insect sentience or other smaller sub-cause areas like that. does anyone have access to that?
Love this tweet you shared
Ah okay good to know, thanks Henri!
Thanks for making this 🥺 honestly, just reading your words about RSI and not getting out of bed, and then having you even recommend rest, hits me hard for some reason? 🥺
Ah okie cool, and yeah for sure!
Cool that you did this, Oscar! What prompted you to make it?
It seems like, regarding EA engagement, well-organized city groups in smaller countries can have a significant, concentrated impact. I read up a bit on EA Estonia/Estonia as a result of this post (didn’t know much about them before this!), and Estonia is a relatively small country with concentrated efforts in key urban centers (Tallinn, the capital; Tartu, a university city). The synergy between the two seems to have the potential to create a concentrated and cohesive national EA network. The idea of cohesive communities <> smaller countries makes sense too.
Also, I imagine that in smaller countries with one or a few concentrated, influential universities/intellectual hubs, this can lead to higher EA visibility, network cohesion, and potential EA engagement. E.g. Estonia with the University of Tartu? New Zealand and the University of Auckland? Switzerland and ETH Zurich? Norway and the University of Oslo? (People with more knowledge here, please correct me if I’m wrong!)
I just wanted to put it out here that, yes, my ex and I met in EA, and reflecting on our past, I appreciate the moments of growth and laughter we shared, as well as our work together. Though our paths have diverged, I’m grateful for the lessons learned and the memories we created together.
I think prestige has been one strategy in CB, but not one applied wholly across CB; it’s one I noticed working a lot in certain groups, based on my experience
(Haven’t tried more complicated software because one that was good doesn’t run on a Mac and the others are just okay? The Mac one works, but it’s quite slow)
I appreciate this! RSI is the bane of my existence HAHA. I do a bit of voice recognition but nothing too complicated. My friends make fun of me now, like I’m Louis Litt with his voice recorder 😆 What software do you use?
Charlotte! So nice to hear from you, you’re definitely one of the loveliest people I’ve met in EA. I’d love to talk about it one day maybe! You’re right that it’s frustrating to have this mix of emotions. I just decided I had too many negative ones to feel I was being productive — after writing this post I’ve been immensely happier and more productive!
Hope you’re well :)
Hey, super appreciate this! I agree. I’ve gotten sooo many echoes of solidarity from others (people in EA and people who have left), but they were all private, and I understand why
I appreciate that! I definitely believe I am EA in ideals, but I just felt immediate relief identifying as EA-adjacent because it made me feel more solid in the ideals without having to interact so deeply with the community
Hi Jessica! I also was happy to work with you. Thanks for commenting. I want to reiterate that I understood this decision and why it was made, but I can’t say it made me feel good (especially when it happened; maybe one good way to describe it is that it felt like CEA had favorite kids). And I’ve gotten lots of private messages after this post voicing similar sad feelings. As someone who does believe in effective decision-making and impartiality here, I really just understood and accepted it.
I think in my post I was trying to voice the feelings of sadness I’ve held in about different aspects of EA and EA CB. Some people can easily bring their emotions in tune with their rationally held beliefs. I’m not exactly like that, so despite understanding why CEA did it, it still made me sad about who I was at that period of time. It didn’t mean I couldn’t get into an Ivy League in the future, but it did mean I wasn’t at an Ivy League then (not that I hadn’t thought about it; many factors just made it so that college had to be where I was based), and that automatically created an invisible barrier between me and my Ivy League colleagues.
I agree with some of the sentiments of others in this comment section: that it plays to the system, and I guess that’s sometimes the fastest, most effective way. But it does make me sad, because it makes me feel that so many people in this theoretical future are bound to the status quo.
I am partially sad that a lot of people seem to be missing the point. It kinda proves the point I was trying to make
So exciting!
A lot of people have said these notes were helpful, so I’m sharing them here on the EAF! Here are notes on NTI | bio’s recent event with Dr. Lu Borio on H5N1 Bird Flu, in case anyone finds them useful!