Agreed. I’d add that the EV of intellectual progress in certain fields could also be net negative for other reasons. What resonates with me most is concerns about increasing existential risk (e.g., because some dangerous technological development is accelerated; see also differential progress). But there could also be other downsides, such as risks of increasing meat consumption or environmental damage (e.g., via economic and population growth).
But this will vary from field to field and depend on many other factors, and of course progress in many areas brings substantial benefits as well. I just raise this as a possibility worth considering.
Yeah, I think there's a similar concern any time you make other fields better run. That said, as a rule of thumb, this seems a lot safer than making those fields worse run. It would be great to be able to apply intellectual abilities selectively, but when that's too hard, applying them generally seems fairly good to me.