Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety)
Crossposted from Otherwise
This is the story of how I started to care about AI risk. It’s far from an ideal decision-making process, but I wanted to try to spell out the untidy reality.
I first learned about the idea of AI risk by reading a lot of LessWrong in the summer of 2011. I didn’t like the idea of directing resources toward it. I didn’t spell out my reasons to myself at the time, but here’s what I think was going on under the surface:
I was already really dedicated to global health as a cause area, and didn’t want competition with that.
The concrete thing you could do about AI risk seemed to be “donate to MIRI,” and I didn’t understand what MIRI was doing or how it was going to help.
These people all seemed to be California tech guys, and that wasn’t my culture.
My explicit thoughts were something like:
Well yeah, I can see how misaligned AI might be the end of everything
But maybe that wouldn’t be so bad; seems like there’s a lot of suffering in the world
Anyway, I don’t know what we’re really going to do about it.
In 2017, a coworker/friend who had worked at an early version of MIRI talked to some of her old friends and got particularly worried about short AI timelines. And seeing her level of concern clicked with me. She wasn’t a California tech guy; she was a former civil rights lawyer from Detroit. She was a Giving What We Can member. She felt like My People.
And I started to take it seriously. I started to feel viscerally that it could be very bad for everything I cared about if we developed superhuman AI and we weren’t ready.
Once I started caring about this area a lot, I took a fresh look around at what might be done about it. In the time since I’d first encountered the idea, more people had also started taking it seriously. Now there were more projects like AI policy work that I found easier to comprehend.
Two other things that shifted over time:
My concern about people and animals having net-negative lives has been related to what’s happening with my own depression. My concern is a lot stronger when I’m doing worse personally. [edited to add: I don’t know which of these impressions is more accurate — just noting that my sense of the external world shifts depending on my internal state.]
Once I had children, I had a gut-level feeling that it was extremely important that they have long and healthy lives.
Changing my beliefs didn’t mean there were especially good actions to take. Once I changed my view on AI safety I was more willing to donate to that area, but a lot of people had the same idea, and there wasn’t (and still isn’t) much obvious work that isn’t already funded. So I’ve continued donating to a mix of global health (which I still really value) and EA community-building. I was already doing cause-general work and didn’t think I could be more useful in direct work, but I started to encourage other people to consider work on global catastrophic risks.
Reflections now:
What subculture you belong to doesn’t mean much about how right you are about something. Subcultures / echo chambers develop different ideas from the mainstream, some of which will be valuable and many of which will be pointless or harmful. (LessWrong was also very into cryonics at the time, and I think it’s right for that idea to get a lot less attention than AI safety.)
One downside of a homogeneous culture is that other people may bounce off for tribalistic reasons.
Because you don’t share the same concerns, and don’t speak to the things they care about
Because they’re put off in some basic social or demographic way, and never seriously listen to you in the first place
When I think about what could have alerted me that my thinking was driven by group identity more than by logic, what comes to mind is the feeling of annoyance I had about “AI people.”