I am a researcher at Rethink Priorities’ Worldview Investigations Team. I also do work for Oxford’s Global Priorities Institute. Previously I was a research analyst at the Forethought Foundation for Global Priorities Research. I took the role after completing the MPhil in Economics at Oxford University. Before that, I studied Mathematics and Philosophy at the University of St Andrews.
I agree with you, Oscar, and we’ve highlighted this in the summary table, where I borrowed your ‘contrasting project preferences’ terminology. Still, I think it could be worth drawing the conceptual distinctions, since they might help identify places where bargains can occur.
I liked your example too! We tried to add a few (a GCR-focused agent believes AI advances are imminent, while a GHD agent is skeptical; an AI safety agent borrows resources from a global health agent to fund urgent AI research; the meat-eater problem; a gun rights supporter and a gun control supporter both funding a neutral charity like Oxfam...), but we could have done better at highlighting them. I’ve also added these to the table.
I found your last mathematical note a bit confusing because I originally read A, B, C as projects they might each support. But if they’re outcomes (i.e., pairs of projects they would each support), then I think I’m with you!