Thanks for this post Julia! I really related to some parts of it, while other parts were very different from my experience. I’ll take this opportunity to share a draft I wrote sometime last year, since I think it’s in a similar spirit:
I used to be pretty uncomfortable with, and even mad about, the prominence of AI safety in EA. I always saw the logic – upon reading the sequences circa 2012, I quickly agreed that creating superintelligent entities not perfectly aligned with human values could go really, really badly, so of course AI safety was important in that sense – but did it really have to be such a central part of the EA movement, which (I felt) could otherwise have much wider acceptance and thus save more children from malaria? Of course, it would be worth allowing some deaths now to prevent a misaligned AI from killing everyone, so even then I didn’t object exactly, but I was internally upset about the perception of my movement and about the dead kids.
I don’t feel this way anymore. What changed?
1. [people aren’t gonna like EA anyways – I’ve gotten more cynical and no longer think that AI was necessarily their true objection]
2. [AI safety more concrete now – the sequences were extremely insistent but without much in the way of actual asks, which is an unsettling combo all by itself. Move to Berkeley? Devote your life to blogging about ethics? Spend $100k on cryo? On some level those all seemed like the best available ways to prove yourself a True Believer! I was willing to lowercase-b believe, but wary of being a capital-B Believer, which in the absence of actual work to do is the only way to signal that you understand the Most Important Thing In The World]
3. [practice thinking about the general case, longtermism]
Unfortunately I no longer remember exactly what I was thinking with #3, though I could guess. #1 and #2 still make sense to me and I could try to expand on them if they’re not clear to others.
Thinking about it now, I might add something like:
4. [better internalization of the fact that EA isn’t the only way to do good lol – people who care about global health and wouldn’t care about AI are doing good work in global health as we speak]
Yes, the drive to prove you Belong is another one of those under-the-surface things that’s surprisingly powerful!