Thank you for your response, @Dan H . I understand that you do not agree with a lot of EA doctrine (for lack of a better word), but that you are a Longtermist, albeit not a “strong axiological longtermist.” Would that be a fair statement?
Also, although it took some time, I’ve met a lot of scientists working on AI safety who have nothing to do with EA, Longtermism, or AI doom scenarios. It’s just that they don’t publish open letters, create political action funds, or have any funding mechanism comparable to Open Philanthropy or similarly-minded billionaire donors like Jaan Tallinn and Vitalik Buterin. As a result, there’s the illusion that AI safety is dominated by EA-trained philosophers and engineers.