I’m a grantmaker at Longview and manage the Digital Sentience Fund, so I thought I’d share my thinking here: “backchaining from… making the long-term future go well conditional on no AI takeover” is my goal with the fund (with the restriction of being related to the wellbeing of AIs in a somewhat direct way), though we might disagree on how that’s best achieved through funding. Specifically, the things you’re excited about would probably be toward the top of the list of things I’m excited about, but I also think broader empirical and philosophical work and field-building are some of the best ways to get there.
Relative to Lukas’s post, I’d say my goals are, in order, 5 and 2, then 4, then 3 and 1. An additional goal is improving the design of models that might be sticky over the long term.
All of the things on those lists require technical and policy researchers, engineers, lawyers, etc. who basically don’t currently exist in large numbers, so I do think fairly broad field building is important. There are pretty tight limits to how targeted field building can be: you can target, e.g., ML versus law, and you can suggest topics, but you’re basically just creating new specialists in the fields you pick, who then pursue whatever topics they want.
Our recent funding opportunities targeted ML and neuroscience, so more around understanding minds than things like the role of AI minds in society and trade. I’d guess that we repeat this, but also run an opportunity focused more on law, trade, etc., or add suggested topics along those lines.
Realistically addressing many of the things on those lists likely also requires a mature field of understanding AI minds, so I think empirical and philosophical work on sentience feeds into it.
To get concrete, the recent distribution of funds and donations we’ve advised on (which is a decent approximation of where things go) looks like: ~50% field building, of which maybe 10% is on things like the role of AI minds in society (including, e.g., trade) rather than on understanding AI minds; 40% research, of which maybe 25% is on the role of AI minds in society; a bit more than 10% lab-facing work; and 10% other miscellaneous things like communications and preparatory policy work. Generally, the things I’m most excited to grow are lab-facing work and work on the role of AI minds in society.
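(To make the nested percentages concrete, here’s a rough back-of-the-envelope sketch of the implied share of the overall portfolio going to role-of-AI-minds-in-society work. It just multiplies out the approximate figures above; the bucket labels are illustrative, not an exact accounting.)

```python
# Back-of-the-envelope only: these are the rough, approximate figures from the
# paragraph above, not an exact accounting of the portfolio.
portfolio = {
    "field building": 0.50,              # ~50% of recent funds/donations advised on
    "research": 0.40,                    # 40%
    "lab-facing work": 0.10,             # "a bit more than 10%"
    "other (comms, policy prep)": 0.10,  # ~10%
}

# Rough share of each bucket aimed at the role of AI minds in society
# (the remainder is mostly on understanding AI minds).
society_share = {"field building": 0.10, "research": 0.25}

implied_total = sum(portfolio[k] * society_share.get(k, 0.0) for k in portfolio)
print(f"Implied 'role of AI minds in society' share of the total: ~{implied_total:.0%}")
# On these rough numbers, that's ~15% of the overall portfolio.
```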
(I also care about “averting AI takeover” and factor that in, though it’s not the main goal and gets less weight.)
Thanks. I’m somewhat glad to hear this.
One crux is that I’m worried that broad field-building mostly recruits people to work on stuff like “are AIs conscious” and “how can we improve short-term AI welfare” rather than “how can we do digital-minds stuff to improve what the von Neumann probes tile the universe with.” So the field-building feels approximately zero-value to me — I doubt you’ll be able to steer people toward the important stuff in the future.
A smaller crux is that I’m worried about lab-facing work similarly being poorly aimed.
Oh, clarification: it’s very possible that there aren’t great grant opportunities by my lights. It’s not like I’m aware of great opportunities that the other Zach isn’t funding. I should have focused more on expected grants than Zach’s process.
I find this distinction kind of odd. If we care about what digital minds we produce in the future, what should we be doing now?
I expect that what minds we build in large numbers in the future will largely depend on how we answer a political question. The best way to prepare now for influencing how we as a society answer that question (in a positive way) is to build up a community with a reputation for good research, figure out the most important cruxes and what we should say about them, develop a better understanding of what we should actually be aiming for, initiate valuable relationships with potential stakeholders based on mutual respect and trust, create basic norms about human-AI relationships, and so on. To me, that looks like engaging with whether near-future AIs are conscious (or have other morally important traits) and working with stakeholders to figure out what policies make sense at what times.
Though I would have thought the posts you highlighted as work you’re more optimistic about fit squarely within that project, so maybe I’m misunderstanding you.
I’m not sure what we should be doing now! But I expect that people can make progress if they backchain from the von Neumann probes, whereas my impression is that most people entering the “digital sentience” space never think about the von Neumann probes.