I think this question would be better if it were not framed in terms of the EA community, for two reasons:
The object-level reasoning about timelines and the different intervention strategies they favor is interesting in itself, and there is no need to add the extra layer of working out what the community is currently doing and how, in practice, it could and should adjust.
It would also signal-boost a norm of focusing less on intra-movement prioritization and more on personal or marginal prioritization and on object-level questions.
For example, I like Dylan’s reformulation attempt because it is about object-level differences. Another option would be to ask about the next $100K invested in AI safety.
Whether it is true depends on the community, and the point I’m making is aimed primarily at EAs (and EA-adjacent people). It might also hold for the AI safety and governance communities. I don’t think it holds in general, though: most citizens and most politicians are not giving too little regard to long timelines. So I’m not sure the point can be made if the reference to the community is removed.
Also, I’m focusing specifically on the set of people who are trying to respond rationally and altruistically to these dangers, and who are doing so in a somewhat coordinated manner. For example, a key aspect is that the portfolio is currently skewed towards the near term.
Re the first point, I agree that the context should be a person with an EA philosophy.
Re the second point, I think discussions about the EA portfolio are often interpreted as zero-sum or tribal, and may create more division in the movement.
I agree that most of the effects of such a debate would likely come from shifting our portfolio of efforts around. However, there are other possible effects (recruiting, onboarding, promoting or aiding existing efforts, or increasing total resources by getting more readers involved). Also, a shift in the portfolio can result from object-level discussion, and it is not clear to me which way is better.
I guess my main point is that I’d like people in the community to think less about what the community should think. Err.. oops..