Hi Fai, I appreciate your disagreements. Regarding the tractability of helping digital minds: I participated in CLR’s six-week S-risk Intro Fellowship, and their work seemed quite promising to me. For example, many digital-mind s-risks stem from advanced AI that could be developed soon, and CLR has connections with some of the organizations that might develop such AI. So it seems plausible to me that they could reduce s-risks by influencing how AI is developed. You can see CLR’s ideas on how to shape AI development to reduce s-risks on their publications page [edit 2023-02-21: actually, I’m unsure whether it’s easy to learn their ideas from that page; that’s not how I learned them, so I regret mentioning it]. Some of their other work seems promising to me too. I don’t see such powerful levers for animals in longtermism, though perhaps you can convince me otherwise. I am not familiar with the work of SI. (I’ll address your other points separately.)