With respect to 2, I’m thinking something on the order of insect brains. There are reasons to expect pleasure to scale sublinearly with brain size even in artificial brains optimized for pleasure, e.g. many unnecessary connections that don’t produce additional value, the greater difficulty of building larger brains without getting things wrong, or even giving some weight to the belief that integrating minds actually reduces value, say because of bottlenecks in some of the relevant circuits/functions. Smaller brains are also easier/faster to run in parallel.
This is assuming the probability of consciousness doesn’t dominate. There may also be scale efficiencies pushing the other way, since the brains need containers and to be connected to things (even digitally?), or there may be some other per-brain overhead.
So, I don’t think it would be too surprising to find the optimal average in the marginally good range.
That all seems fair to me.