Reports like this make me seriously doubt whether I’m just selfishly prioritising AGI research because it’s more interesting, novel, higher-status, etc. I don’t think so, but the cost of being wrong is enormous.
I think it also depends on what the impact of AGI on farmed animals is.
If you are in a position to influence a deployment of AGI in a way that minimizes farmed animal suffering, or minimizes S-risks, then it can be very impactful.
If solving alignment with human values merely allows factory farming to continue far into the future, then this could have a negative impact: https://www.forbes.com/sites/briankateman/2022/09/06/optimistic-longtermism-is-terrible-for-animals/?sh=328a115d2059