Fantastic post! It’s a significant upgrade from the “terminal/instrumental values” mental model I was previously using.
When I first joined EA, I looked at the annual survey of EAs and was surprised to see so much variation in how EAs ranked the importance of the major causes. I thought that the group would be moving towards a consensus, and that each individual member would be able to trace their actions up towards their understanding of the most important causes.
Personally, I tried to build up my own understanding of the cause priority from strong foundations, doing my best to answer meta questions like “do I value all people equally”, “how do I weight animal suffering vs human happiness”. From there, I worked my way down the V2ADC, trying to meta-analyze the research on causes, eventually coming to an area that I felt confident was the best place to add value.
I think with a bit more nuance, the EA survey could serve as a good feedback mechanism to see where on the chain we all see ourselves, and to see if the sum of the parts adds up to anything resembling a consistent whole. Will the EA community end up converging in beliefs and strategy? Is it an elephant in the room to say that half of the people working on X cause ought to shift to Y cause because the people up the chain are confident that it is a better move for the community? Even if the exploratory folks at the bottom raised their evidence up the chain, would we have enough corrigibility to pivot? (Love that word, totally gonna use it more!)
Hi @Naryan,

I’m glad that this is a more powerful tool for you.
And kudos for working things from the foundations up! Personally, I still need to take a few hours with a pen and paper to systematically work through the decision chain myself. A friend has been nudging me to do that. :-)
Gregory Lewis makes the argument above that some EAs are moving in the direction of working on long-term future work and few are moving back out. I’m inclined to agree with him that they probably have good reasons for that.
I’d also love to see the results of some far-mode vs. near-mode questions put in the EA Survey, or perhaps sent out by Spencer Greenberg (not sure if there’s an existing psychological scale to gauge how much people are in each mode when working throughout the day). And of course, how they correlate with cause area preferences.
Max Dalton explained to me at EA Global London last year that ‘corrigibility’ was one of the most important traits to look for when selecting people you want to work with, so credit to him. :-) My contribution here is adding the distinction that people often seem more corrigible at some levels than others, especially when they’re new to the community.
(also, I love that sentence – “if the exploratory folks at the bottom raised evidence up the chain...”)