Hey Wei_Dai, thanks for this feedback! I agree that philosophers can be useful in alignment research by way of working on some of the philosophical questions you list in the linked post. Insofar as you’re talking about working on questions like those within academia, I think of that as covered by the suggestion to work on global priorities research. For instance, I know that working on some of those questions would be welcome at the Global Priorities Institute, and I think FHI would probably also welcome philosophers working on AI questions. But I agree that that isn’t clear from the article, and I’ve added a bit to clarify it.
But maybe the suggestion is working on those questions outside academia. We mention DeepMind and OpenAI as having ethics divisions, but likely only some of the philosophical questions relevant to AI safety are addressed in those kinds of centers, and it could be worth listing more non-academic settings in which philosophers might be able to pursue alignment-relevant questions. There are, for instance, lots of AI ethics organizations, though most focus only on short-term issues and are more concerned with ‘implications’ than with philosophical questions that arise in the course of design. CHAI, AI Impacts, the Leverhulme center, and MIRI also each seem to do a bit of philosophy. The future Schwarzman Center at Oxford may also be a good place for this once it gets going. I’ve edited the relevant sections to reflect this.
Do you know of any other projects or organizations that might be useful to mention? I also think your list of philosophy questions relevant to AI is useful (thanks for writing it up!) and would like to link to it in the article.
As for the comparison with journalism and AI policy, in line with what Will wrote below, I was thinking of those as suggestions for people who are trying to get out of philosophy or who decide not to go into it in the first place, i.e., for people who would be good at philosophy but who choose to do something else that takes advantage of their general strengths.
Thanks for making the changes. I think they address most of my concerns. However, I think splitting the AI safety organizations mentioned between academic and non-academic is suboptimal, because what seems most important is that someone who can contribute to AI safety go to an organization that can use them, whether or not that organization belongs to a university. On a pragmatic level, I’m worried that someone will see a list of organizations where they can contribute to AI safety and not realize that there’s another list in a distant part of the article.
Do you know of any other projects or organizations that might be useful to mention?
Individual grants from various EA sources seem worth mentioning. I would also suggest mentioning FHI for AI safety research, not just global priorities research.
As for the comparison with journalism and AI policy, in line with what Will wrote below, I was thinking of those as suggestions for people who are trying to get out of philosophy or who decide not to go into it in the first place, i.e., for people who would be good at philosophy but who choose to do something else that takes advantage of their general strengths.
Ok, that wasn’t clear to me, as there’s nothing in the text that explicitly says those suggestions are for people who are trying to get out of philosophy. Instead the opening of that section says “If you want to leave academia”. I think you can address this as well as my “splitting” concern above by reorganizing the article into “careers inside philosophy” and “careers outside philosophy” instead of “careers inside academia” and “careers outside academia”. (But it’s just a suggestion as I’m sure you have other considerations for how to organize the article.)
Re: these being alternatives to philosophy, I see what you mean. But I think it’s ok to group together non-academic philosophy and non-philosophy alternatives since it’s a career review of philosophy academia. However, I take the point that I can better connect the two ‘alternatives’ sections in the article and have added a link.
As for individual grants, I’m hesitant to add that suggestion because I worry that it would encourage some people who aren’t able to get philosophy roles in academia or in other organizations to go the ‘independent’ route, and I think that will rarely be the right choice.
As for individual grants, I’m hesitant to add that suggestion because I worry that it would encourage some people who aren’t able to get philosophy roles in academia or in other organizations to go the ‘independent’ route, and I think that will rarely be the right choice.
I’m interested to hear why you think that. My own thinking is that a typical AI safety research organization may not currently be very willing to hire someone with mainly a philosophy background, so they may have to first prove their value by doing some AI safety-related independent research. After that they can either join a research org or continue down the ‘independent’ route if it seems suitable to them. Does this not seem like a good plan?