AIM simply doesn’t rate AI safety as a priority cause area. It’s not any particular organisation’s job to work on your favourite cause area. They are allowed to have a different prioritisation from you.
I think Yanni isn’t writing about personal favourites. Assuming there is such a thing as objective truth, it makes sense to discuss cause prioritization as an objective question.
Hmmm, I think the fact that you felt this was worth pointing out, AND that people upvoted it, means that I haven’t made my point clear. My major concern is that there are things known about the challenges that come with incubating longtermist orgs that aren’t being discussed openly.
Maybe I misunderstood you.
I think AIM doesn’t constitute evidence for this. Your top hypothesis should be that they don’t think AI safety is that good of a cause area, before positing the more complicated explanation. I say this partly based on interacting with people who have worked at AIM.