I’m trying to understand whether you mean one of two different propositions:
1. The more important this century is, the more important it is in absolute terms to raise the representativeness of EA (in this century). That is, if we increase our probability that this is the most important century, we should expect more utility from increasing the representativeness of EA.
2. The more important this century is, the more important it is in relative terms to raise the representativeness of EA (in this century). That is, if we increase our probability that this is the most important century, the importance of increasing the representativeness of EA rises relative to the importance of other EA activities (e.g., recruiting scientific talent and researching ways to decrease existential risk).
The first proposition is a useful note, but does not by itself demand additional action. The second proposition suggests that we should change our prioritization. I think your post is arguing for the second proposition, but I’m not sure and would like to get some clarity.
Ah, good distinction! I agree I wasn’t clear on that in my post (and to be honest, my own thinking on it wasn’t very clear either before you pointed out this distinction).
In part I am arguing for proposition 2. If it is the most important century, all long-term causes become more important relative to near-term causes. So at the very least, if it is the most important century, raising the representativeness of EA increases in importance relative to, e.g., distributing bednets (1).
But what I’m really arguing for is that representativeness is more important for long-termism than most people in EA seem to think it is. And if you were underrating the importance of raising EA’s representativeness (as I think the EA community does), additional action is demanded. I look through the lens of “if this is the most important century, representativeness is urgent” to illustrate the point.
I could just as well, and maybe more accurately, have called this article “A long-termist argument for the importance of EA’s representativeness, based on value lock-in”.
1.
I think it’s a thornier question whether raising the representativeness of EA becomes more important relative to other long-term cause areas. The answer would depend on the timelines of the different long-termist issues and the degree of lock-in each of them has.
Lock-in: If lock-in is stronger in decisions driven by value judgments than in decisions driven by scientific understanding, then representativeness increases in importance relative to recruiting scientific talent. If the converse holds, the opposite is true.
Timelines: Imagine that in an “EA business as usual” approach (i.e., not the most important century) it takes 30 years to attract the best scientific talent and 300 years to make EA representative, but in a “most important century” approach it takes 10 years to attract the best talent and 10 years to make EA representative. Then “making EA representative” has likely increased in importance relative to “attracting the best scientific talent” as a result of it being the most important century. (My sense is that something like this is the case.)
I don’t have a strong view on this, and it could make for some interesting analysis!
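To make the timelines point a bit more concrete, here is a minimal toy sketch in Python. All of the numbers, including the 80-year lock-in date, are hypothetical illustrations of the structure of the argument, not estimates:

```python
# Toy model of the timelines argument. Assumption (hypothetical): if this
# is the most important century, values lock in roughly 80 years from now.
LOCK_IN_YEARS = 80

def completes_in_time(years_needed: float) -> bool:
    """An intervention only pays off if it finishes before values lock in."""
    return years_needed <= LOCK_IN_YEARS

# Timelines from the example above: (attract talent, make EA representative)
timelines = {
    "business-as-usual pace": (30, 300),
    "most-important-century pace": (10, 10),
}

for pace, (talent_years, repr_years) in timelines.items():
    print(f"{pace}: talent in time: {completes_in_time(talent_years)}, "
          f"representativeness in time: {completes_in_time(repr_years)}")

# At business-as-usual pace only attracting talent beats the deadline;
# at most-important-century pace both do, so the relative importance of
# representativeness rises from roughly zero to parity.
```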