It’s interesting that all the aforementioned examples of the concrete good EA does pertain to global health and development, while EA is becoming highly skewed towards AI risk and longtermist causes, where it is going to be much more difficult to justify the good that can potentially be done. Advocating for EA will be much more difficult in the coming years, sadly.
I agree it might be more difficult, but there are steps that I think could make the advocacy easier. Obviously there are always tradeoffs here.
Having a more compassionate and caring tone when talking about X-risk causes. I think EA has a bit of a tone problem when it comes to outward-facing materials. For example, the 80,000 Hours page is friendly and very well communicated, but there are few (if any) warm and compassionate vibes. The idea that EAs are into X-risk mitigation because they really care about people and the future of humanity in general could be more front and center.
The climate change movement, for example, talks about things like “creating a positive future for our grandchildren”; maybe we could take a leaf out of that book.
Acknowledge and lean into the good vibes that global health and development work gives off by putting it a bit more front and center, even if it means sacrificing pure epistemic integrity at times.
Nick—yes, absolutely. The main PR problem with longtermism and X risk is that we haven’t quite found the most effective ways to express kindness and benevolence towards future people, including our own kids, grandkids, and descendants. I agree that ‘creating a positive future for our grandchildren’ is a good start.
As a rabid pronatalist, I’ve noticed that EAs often seem quite reluctant to advocate for ‘selfish’ emphasis on kids, families, and lineages… as if that’s an unseemly shrinking of the ‘moral circle’. But most adults are parents, and most parents care deeply about the world that their kids will inhabit. I think we have to be willing to reframe X risk minimization as concrete parental protectiveness, rather than some abstract concern for generic ‘future people’.
I agree with you, Nick, that we should present AI risks in a much more human way; I just don’t think that’s the path taken by the loudest voices on AI risk right now, and that’s a shame. And I see no incompatibility between good epistemics and wanting to make the field of AI safety more inclusive and kind, so that it includes everybody and not just software engineers who went into EA because there was money (see the post on the large amount of funding going to AI safety positions that pay three times as much as researchers working in hospitals, etc.) and prestige (they’ve been into ML for so long and now is their chance to get opportunities and recognition). I want to dive deeper into how EA-oriented these new EAs are when it comes to the core values that created the EA movement.
On a constructive note, as a community builder, I am building projects from the ground up that focus on the role of AI risks in soaring inequality, or on the possibility of AI being used by a tyrannical power: themes that clearly signal impact for everyone, rather than staying in the realm of singletons and other abstract figures just because it’s intellectually satisfying to think about them.
Yeah, I love that. I agree that communicating well about the inequality, authoritarianism, and violence risks that AI could present is another potentially great angle, even if that doesn’t describe the X-risk we are most worried about.
Classic x-risk concerns (the murder of all humans) seem pretty violent to me.
For sure. That’s mainly my point: the communication could be more about preventing “death and violence” rather than “mitigating x-risk”.
And yeah, I was talking about a different context of AI-enabled violence than x-risk, but my point is about how we communicate, not about the outcome.