One crux is that I’m worried that broad field-building mostly recruits people to work on stuff like “are AIs conscious” and “how can we improve short-term AI welfare” rather than “how can we do digital-minds stuff to improve what the von Neumann probes tile the universe with.” So the field-building feels approximately zero-value to me — I doubt you’ll be able to steer people toward the important stuff in the future.
A smaller crux is that I’m worried about lab-facing work similarly being poorly aimed.
Oh, clarification: it’s very possible that there aren’t great grant opportunities by my lights. It’s not like I’m aware of great opportunities that the other Zach isn’t funding. I should have focused more on expected grants than Zach’s process.
I find this distinction kind of odd. If we care about what digital minds we produce in the future, what should we be doing now?
I expect that what minds we build in large numbers in the future will largely depend on how we answer a political question. The best way to prepare now for influencing how we as a society answer that question (in a positive way) is to build up a community with a reputation for good research, figure out the most important cruxes and what we should say about them, create a better understanding of what we should actually be aiming for, initiate valuable relationships with potential stakeholders based on mutual respect and trust, create basic norms about human-AI relationships, and so on. To me, that looks like engaging with whether near-future AIs are conscious (or have other morally important traits) and working with stakeholders to figure out what policies make sense at what times.
Though I would have thought the posts you highlighted as work you’re more optimistic about fit squarely within that project, so maybe I’m misunderstanding you.
I’m not sure what we should be doing now! But I expect that people can make progress if they backchain from the von Neumann probes, whereas my impression is that most people entering the “digital sentience” space never think about the von Neumann probes.
Thanks. I’m somewhat glad to hear this.