This reads to me like you’re saying “these problems are hard [so Wei Dai is over-rating the importance of working on them]”, whereas the inference I would make is “these problems are hard, so we need to slow down AI development, otherwise we won’t be able to solve them in time.”
I didn’t mean to imply that Wei Dai was overrating the problems’ importance. I agree they’re very important! I was making the case that they’re also very intractable.
If I thought solving these problems pre-TAI would be a big increase to the EV of the future, I’d take their difficulty to be a(nother) reason to slow down AI development. But I think I’m more optimistic than you and Wei Dai about waiting until we have smart AIs to help us on these problems.
Do you want to talk about why you’re relatively optimistic? I’ve tried to explain my own concerns/pessimism at https://www.lesswrong.com/posts/EByDsY9S3EDhhfFzC/some-thoughts-on-metaphilosophy and https://forum.effectivealtruism.org/posts/axSfJXriBWEixsHGR/ai-doing-philosophy-ai-generating-hands.
I said a little in another thread. If we get aligned AI, I think it’ll likely be a corrigible assistant that doesn’t have its own philosophical views that it wants to act on. And then we can use these assistants to help us solve philosophical problems. I’m imagining in particular that these AIs could be very good at mapping logical space, tracing all the implications of various views, etc. So you could ask a question and receive a response like: ‘Here are the different views on this question. Here’s why they’re mutually exclusive and jointly exhaustive. Here are all the most serious objections to each view. Here are all the responses to those objections. Here are all the objections to those responses,’ and so on. That would be a huge boost to philosophical progress. Progress has been slow so far because human philosophers take entire lifetimes just to fill in one small part of this enormous map, because humans make errors, so later philosophers can’t even trust that small filled-in part, and because verification in philosophy isn’t much quicker than generation.
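To make the kind of map I have in mind a bit more concrete, here’s a minimal sketch of the recursive views/objections/responses structure such an assistant might output (the question, the claims, and all the names here are just hypothetical illustrations, not anything an actual system produces):

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One node in the map: a view, an objection, or a response."""
    text: str
    # Objections to a view or response, or responses to an objection.
    replies: list["Claim"] = field(default_factory=list)

@dataclass
class QuestionMap:
    """What the assistant returns for one question: a set of views meant to be
    mutually exclusive and jointly exhaustive, each with its own reply subtree."""
    question: str
    views: list[Claim] = field(default_factory=list)

# A hypothetical fragment of such a map.
example = QuestionMap(
    question="Is moral realism true?",
    views=[
        Claim(
            "Yes: there are stance-independent moral facts.",
            replies=[Claim(
                "Objection: such facts would be metaphysically queer.",
                replies=[Claim("Response: no queerer than mathematical facts.")],
            )],
        ),
        Claim("No: all moral claims are stance-dependent or false."),
    ],
)
```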
The argument tree (arguments, counterarguments, counter-counterarguments, and so on) is exponentially sized, and we don’t know how deep or wide we need to expand it before a given problem can be solved. We do know that different humans looking at the same partial tree (i.e., philosophers who have read the same literature on some problem) can have very different judgments as to what the correct conclusion is. There’s also a huge amount of intuition/judgment involved in choosing which part of the tree to focus on or expand further. With AIs helping to expand the tree for us, there are potential advantages like you mentioned, but also potential disadvantages, like AIs not having good intuition/judgment about which lines of argument to pursue, or the argument tree (or AI-generated philosophical literature) becoming too large for any human to read and think about in a relevant time frame. Many will be very tempted to just let AIs answer the questions / draw the final conclusions for us, especially if AIs also accelerate technological progress, creating many urgent philosophical problems related to how to use them safely and beneficially. And if humans do try to draw the conclusions themselves, they can easily get them wrong despite AI help with expanding the argument tree.
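To put rough numbers on the size problem, here’s a minimal sketch with a toy branching factor, and with hypothetical `judgment` and `generate_children` functions standing in for whatever human intuition or AI generation would actually have to do:

```python
import heapq

def full_tree_size(branching: int, depth: int) -> int:
    """Total nodes in a fully expanded argument tree: sum of branching**level
    over all levels from the root down to the given depth."""
    return sum(branching ** level for level in range(depth + 1))

# Even modest parameters blow up: about 12.2 million nodes for 5
# counterarguments per argument and 10 levels of back-and-forth.
print(full_tree_size(branching=5, depth=10))

def expand_selectively(root_claim, judgment, generate_children, budget):
    """Best-first expansion under a node budget. Which arguments get examined
    at all depends entirely on the judgment heuristic, which is exactly the
    place where AI intuition may be missing or misleading."""
    frontier = [(-judgment(root_claim), root_claim)]  # max-heap via negated scores
    examined = []
    while frontier and len(examined) < budget:
        _, claim = heapq.heappop(frontier)
        examined.append(claim)
        for reply in generate_children(claim):  # e.g. an AI proposing counterarguments
            heapq.heappush(frontier, (-judgment(reply), reply))
    return examined
```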
So I think undergoing the AI transition without either solving metaphilosophy or making AIs autonomously competent at philosophy (good at reaching correct conclusions by themselves) is enormously risky, even if we have corrigible AIs helping us.