Nice summarization post!
On the point of non-EA funders coming into the space, it’s important to consider messaging: we don’t want to come off as alarmist, overly patronizing, or too certain of ourselves, but rather to engage in a constructive dialogue that builds shared understanding of the stakes involved.
There also needs to be incentive alignment, which in the short term might mean collaborating with people on things that aren’t directly X-risk related, such as promoting ethical AI or enhancing transparency in AI development.
+1 on not being alarmist, overly patronizing, or too certain of ourselves. And to be clear, I think this is about more than messaging! I also agree that we need to be able to collaborate with people who have different priorities, but I think it’s important to do so with integrity, to keep prioritising, and not to give up too much.