Many people were interested in how they could contribute. However, often they were more interested in reframing their specific topic to sound more like AI safety rather than making substantial changes to their research.
As stated, this doesn’t sound like wanting to contribute to me.
I definitely agree that it usually isn’t much of a contribution. But for most people who have lived and breathed their personal research area for 5-20 years and just had AI safety explained to them 5-20 minutes ago, finding ways to connect their specialty to AI safety could feel like genuinely wanting to contribute. It’s habitual to take your knowledge and try to make it helpful to the person in front of you, especially in the very first conversation with them about their problem.
I think it’s a process and just takes a bit of time. What I mean is roughly: “People at some point agreed that there is a problem and asked what could be done to solve it. Then they often followed up with ‘I work on problem X, is there something I could do?’ And then some of them tried to frame their existing research to make it sound more like AI safety. However, if you point that out, they might consider other paths to contributing more seriously. I expect most people not to make substantial changes to their research, though. Habits and incentives are really strong drivers.”