First, I feel like you are conflating two issues here. You start and finish by talking about PR, but in the middle you argue for the importance of the future. I think it’s important to separate these two issues to avoid confusion, so I’ll just discuss the PR angle.
I disagree and think there’s a smallish but significant risk of PR badness here. In my experience, even my highly educated friends who aren’t into EA find it very strange that money is invested in researching the welfare of future AI minds at all, and they often flat-out disagree that money should be spent on it. That indicates to me (weakly, from anecdata) that there is at least some PR risk here.
I also think there are pretty straightforward framings, like “millions poured into the welfare of robot minds which don’t even exist yet”, which could certainly be bad for PR. If I were anti-EA, I could write a pretty good hit piece about rich people in Silicon Valley prioritizing their digital-mind hobby horse ahead of millions of real minds that are suffering right now.
What are your grounds for thinking that this has an almost insignificant chance of being “PR costly”?
I also didn’t like this comment because it seemed unnecessarily arrogant, and also dismissive of the many people working in areas that weren’t defunded, who I hope you would consider at least part of the heart of the wonderful EA intellectual ecosystem.
“…defund form the heart of the intellectual community that is responsible for the vast majority of impact of this ecosystem…”
That said, I probably do agree with this...
“An EA community that does not consider whether the minds we aim to control have moral value seems to me like one that has pretty seriously lost its path.”
But I don’t want to conflate that with the PR risk...
For what it’s worth, as a minor point, the animal welfare issues I think are most important, and the interventions I suspect are the most cost-effective right now (e.g. shrimp stunning), are basically only fundable because of EA being weird in the past and willing to explore strange ideas. I think some of this does entail genuine PR risk in certain ways, but I don’t think we would have gotten most of the most valuable progress that EA has made for animal welfare if we paid attention to PR between 2010 and 2021, and the animal welfare space would be much worse off. That doesn’t mean PR shouldn’t be a consideration now, but as a historical matter, I think it is correct that impact in the animal space has largely been driven by comfort with weird ideas. I think the new funding environment is likely a lot worse for making meaningful progress on the most important animal welfare issues.
The “non-weird” animal welfare ideas that are funded right now (corporate chicken campaigns and alternative proteins?) were not EA innovations and were already being pursued by non-EA animal groups when EA funding entered the space. If these are the best interventions OpenPhil can fund due to PR concerns, animals are a lot worse off.
I personally would rather that more animal and global health groups distanced themselves from EA when there are PR risks than that EA distance itself from PR risks. It seems like groups could just make determinations about the right strategies for their own work with regard to PR, instead of there being top-down enforcement of a singular PR strategy, which I think is likely what this change will mostly cause. E.g. I think the EA-side origins of wild animal welfare work are highly risky from a PR angle, but the most effective implementation of them, WAI, both would not have occurred without that PR-risky work (extremely confident), and is now exceedingly normal: it does not pose a PR risk to EA at all (fairly confident), nor does EA pose one to it (somewhat confident). It just reads as a normal wild-animal scientific research group to basically any non-EA who engages with it.
Thanks for the reply! I wasn’t actually aware that animal welfare has run into major PR issues. I didn’t think the public took much interest in wild animal or shrimp welfare. I probably missed it, but I would be interested to see the articles / hit pieces.
I don’t think how “weird” something is necessarily correlates with PR risk. It’s definitely a factor, but there are others too. For example, buying Wytham Abbey wasn’t weird, but it appeared to many in the public to be inconsistent with EA values.
I don’t think these areas have run into PR issues historically, but they are perceived as PR risks.
I agree that I make two separate points. I think evaluating digital sentience seems pretty important from a “try to be a moral person” perspective, and separately, I think it’s just a very reasonable and straightforward question to ask that I expect smart people to be interested in and where smart people will understand why someone might want to do research on this question. Like, sure, you can frame everything in some horribly distorting way, and find some insult that’s vaguely associated with that framing, but I don’t think that’s very predictive of actual reputational risk.
“…and also dismissive of global health and animal welfare people, who I hope you would consider at least part of the heart of the wonderful EA intellectual ecosystem.”
Most of the sub-cause areas that I know have been defunded are animal welfare priorities. Insect suffering and wild animal welfare are two of the sub-cause areas being defunded, both of which I consider to be among the more important animal welfare priorities (due to their extreme neglectedness). I am not being dismissive of either global health or animal welfare people; they are being affected by this just as much. (I know less about global health, and my sense is that the impact of these changes is less bad there, but I still expect a huge negative chilling effect on people trying to think carefully about the issues around global health.)
Specifically on digital minds, I still disagree that it’s a super unlikely area to pose a PR risk. To me it seems easier than other areas to take aim at; the few people I’ve talked to about it find it more objectionable than other EA work I’ve discussed, and it could plausibly be associated with other longtermist EA work that has already taken PR hits.
Thanks for the clarification about the defunded areas. I just assumed it was only longtermist areas being defunded; my bad, I got that wrong. I have corrected my reply.
Would be good to see an actual list of the defunded areas...