I think a key crux here is whether you think AI timelines are short or long. If they’re short, there’s more pressure to focus on immediately applicable work. If they’re long, then there’s more benefit to having philosophers develop ideas which gradually trickle down.
In PIBBSS, we’ve had a mentor note that for alignment to go well, we need more philosophers working on foundational issues in AI, rather than more prosaic researchers. I found that claim interesting, and I currently believe it’s true. Even in short-timeline worlds, we need to figure out some philosophy FAST.