Interesting idea, thanks for doing this! I agree it’s good to have more approachable cause prioritization models, but there are also some associated risks to be careful about:
A widely used model that is not frequently updated could do a lot of damage by spreading outdated views. Unlike large collections of articles, a simple model in graphic form can spread really fast, and once it’s out on the Internet it can’t be taken back.
A model made by a few individuals or some central organisation may run the risk of deviating from the views of the majority of EAs; instead, a more “democratic” way (I’m not too sure what this means exactly) of making the model might be favored.
Views in EA are really diverse, so one single model likely cannot capture all of them.
Also, I think the decision-tree-style framework used here has some inherent drawbacks:
It’s unclear what “yes” and “no” mean.
e.g. What does it mean to agree that “humans have special status”? This could refer to many different positions (see below for examples), which probably lead to vastly different conclusions.
a. humans have twice the moral weight of non-humans
b. all animals are morally weighted by their neuron count (or some non-linear function of neuron count)
c. human utility always trumps non-human utility
for another example, see alexrjl’s comment.
Yes-or-no answers usually don’t serve as necessary and sufficient conditions.
e.g. I think “most influential time in future” is neither necessary nor sufficient for prioritizing “investing for the future”.
e.g. I don’t think the combined condition “suffering-focused OR adding people is neutral OR future pessimism” serves as anything close to a necessary condition for prioritizing “improving quality of future”.
A more powerful framework than decision trees might be favored, though I’m not sure what a better alternative would be. One might look to ML models for candidates, but note that there’s likely a tradeoff between expressiveness and interpretability (the toy sketch below tries to make this concrete).
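To make that tradeoff concrete, here is a toy sketch, purely for illustration (all the factor names, weights and thresholds are invented, not taken from the model under discussion). A binary yes/no branch collapses positions (a)–(c) into one answer, while even a crude weighted score keeps the degree of the view:

```python
# Toy illustration only: all names, weights and thresholds are invented.

# Decision-tree style: one yes/no answer decides the branch,
# so positions (a), (b) and (c) above all look identical.
def tree_branch(humans_have_special_status: bool) -> str:
    if humans_have_special_status:
        return "prioritise human-focused causes"
    return "prioritise animal-inclusive causes"

# Weighted-score style: the same question becomes a continuous moral weight,
# so different "yes" positions map to different numbers.
def weighted_view(animal_moral_weight: float,
                  scale_of_animal_problem: float,
                  scale_of_human_problem: float) -> str:
    animal_score = animal_moral_weight * scale_of_animal_problem
    human_score = 1.0 * scale_of_human_problem
    if animal_score > human_score:
        return "prioritise animal-inclusive causes"
    return "prioritise human-focused causes"

# The tree gives the same answer for anyone who holds (a), (b) or (c):
print(tree_branch(True))                # -> prioritise human-focused causes

# But two people who would both answer "yes" to "humans have special status"
# can land in different places once the degree is made explicit.
print(weighted_view(0.5, 10.0, 1.0))    # -> prioritise animal-inclusive causes
print(weighted_view(0.001, 10.0, 1.0))  # -> prioritise human-focused causes
```

The extra numbers buy expressiveness, but they are harder to elicit and justify, which is exactly the interpretability cost mentioned above.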
And lastly:
In addition, some foundational assumptions common to EA are made, including a consequentialist view of ethics in which wellbeing is what has intrinsic value.
I think there have been some discussions about EA decoupling from consequentialism, which I consider worthwhile. It might be good to include non-consequentialist considerations too.
Thanks for this, you raise a number of useful points.
A widely used model that is not frequently updated could do a lot of damage by spreading outdated views. Unlike large collections of articles, a simple model in graphic form can spread really fast, and once it’s out on the Internet it can’t be taken back.
I guess this risk could be mitigated by ensuring the model is frequently updated and includes disclaimers. I think this risk is faced by many EA orgs, for example 80,000 Hours, but that doesn’t stop them from publishing advice which they regularly update.
A model made by a few individuals or some central organisation may run the risk of deviating from the views of the majority of EAs; instead, a more “democratic” way (I’m not too sure what this means exactly) of making the model might be favored.
I like that idea, and I certainly don’t think my model is anywhere near final (it was just my preliminary attempt with no outside help!). There could be a process of engagement with prominent EAs to finalise a model.
Views in EA are really diverse, so one single model likely cannot capture all of them.
Also fair. However, it seems that certain EA orgs such as 80,000 Hours do adopt certain views, naturally excluding others (for which they have been criticised). Maybe it would make more sense for such a model to be owned by an org like 80,000 Hours, which is open about its longtermist focus, than by CEA, which is supposed to represent EA as a whole.
e.g. What does it mean to agree that “humans have special status”? This could refer to many different positions (see below for examples), which probably lead to vastly different conclusions.
As I said to alexrjl, my idea for a guided flowchart is that nuances like this would be explained in the accompanying guidance, but not necessarily alluded to in the flowchart itself, which is supposed to stay fairly high-level and simple.
Yes-or-no answers usually don’t serve as necessary and sufficient conditions.
I don’t think a flowchart can be 100% prescriptive and final; there are too many nuances to consider. I just want it to raise key considerations for EAs to think through. For example, I think it would be fine for an EA to end up at a certain point in the flowchart and then decide they should actually choose a different cause area, because there is some nuance the flowchart didn’t consider that means they ended up in the wrong place. That’s fine, but in my opinion it would still be good to have a systematic process that ensures EAs consider some really key considerations.
e.g. I think “most influential time in future” is neither necessary nor sufficient for prioritizing “investing for the future”.
Feedback like this is useful and could lead to updating the flowchart itself. I have to say I’m not sure why the most influential time being in the future wouldn’t imply investing for that time though—I’d be interested to hear your reasoning.
I think there have been some discussions about EA decoupling from consequentialism, which I consider worthwhile. It might be good to include non-consequentialist considerations too.
Fair point. As I said before, if an org like 80,000 Hours owned such a model, perhaps they wouldn’t have to go beyond consequentialism. If CEA did, I suspect they should.
Thanks for the reply, your points make sense! There is certainly a question of degree to each of the concerns I wrote about in my comment, so arguments both for and against should be taken into account. (To be clear, I wasn’t raising my points to dismiss your approach; instead, they’re things I think need to be taken care of if we’re to take such an approach.)
I have to say I’m not sure why the most influential time being in the future wouldn’t imply investing for that time though—I’d be interested to hear your reasoning.
Caveat: I haven’t spent much time thinking about this problem of investing vs direct work, so please don’t take my views too seriously. I should have made this clear in my original comment; my bad.
My first consideration is that we need to distinguish between “this century is more important than any given century in the future” and “this century is more important than all centuries in the future combined”. The latter argues strongly against investing for the future, but the former doesn’t seem to, as by investing now (patient philanthropy, movement building, etc.) you can potentially benefit many centuries to come (the toy calculation after these two considerations illustrates this).
The second consideration is that there are many more factors than “how important this century is”. The needs of the EA movement are one (and a particularly important consideration for movement building); personal fit is another, among others.
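To illustrate the first consideration with a deliberately crude toy calculation (all numbers are made up; this is not a real model of patient philanthropy):

```python
# Toy numbers, purely illustrative; not a real model of patient philanthropy.
importance_of_this_century = 10.0      # suppose this century matters 10x a typical future century
importance_per_future_century = 1.0
future_centuries_reached_by_investing = 50  # e.g. via a long-lived fund or movement

value_of_direct_work_now = importance_of_this_century                  # 10
value_of_investing = (importance_per_future_century
                      * future_centuries_reached_by_investing)          # 50

# Even if this century is the single most important one,
# investing can still come out ahead because it touches many centuries at once.
print(value_of_investing > value_of_direct_work_now)  # True
```

Of course, whether anything like these numbers holds is exactly what’s in dispute; the point is only that “this century is the most important single century” doesn’t by itself settle the question.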