One way to affect things is to increase the probability that humanity ends up building a healthy and philosophically competent civilization. (But we already knew that was important.)
Do you know anyone who is actually working on this, especially the second part (philosophical competence)? I’ve been thinking about this myself and have written some LW posts on the topic. (In short, my main message is that if we care about our collective philosophical competence, the AI transition represents both a high risk and a unique opportunity.) But I feel like my public and private efforts to attract more attention and work to this area haven’t yielded much. Do you see things differently?