If the amount of happiness (or suffering) possible is not linear in the number of elementary particles, what number of elementary particles do you suggest using?
I think the excerpt is getting at "maybe all possible universes exist (no claim about likelihood is made, but it's an assumption for the post). If so, it is likely that there are some possible universes, with far more resources than ours, running a simulation of our universe. The behaviour of that simulated universe is the same as ours (it's a good simulation!), and in particular, the behaviour of the simulations of us is the same as our behaviour. If that's true, our behaviours could, through the simulation, influence a much bigger and better-resourced world. If we value outcomes in that universe the same as in ours, maybe a lot of the value of our actions comes from their effect on the big world".
I don't know whether that counts as "the world likely could be a simulation" in the sense you meant. In particular, I don't think Wei Dai is assuming we are more likely to be in a simulation than not (or, as some say, just "more in a simulation than not").
How likely do you think it would be for standard ML research to solve the problems you’re working on in the course of trying to get good performance? Do such concerns affect your project choices much?
For the contamination sentence: what's wrong with equipment and media sterilization? Why wouldn't we just grow meat in sterilized equipment in managed facilities? Also, couldn't we just sterilize after the fact?
For the sensitivity / robustness: why does it need to be robust? Can't it just be grown in a special facility? It's not like you can mimic the Doritos production process at home, but that doesn't stop a lot of Doritos being made. Why would the bioreactor need to be placed outside?
For waste management: this does seem necessary. But months or years of continual operation don't seem necessary (though more efficient if it can be pulled off). If the bioreactor is shut down and sterilized intermittently, that seems like it would suffice.
For scalability: I believe you that scalability is an issue, but the examples in the 7th and 8th sentences seem unnecessary, and unlike any other (roughly) nature-mimicking process we've adopted. Why should the bioreactor need to grow? If the volume needs to change over time, couldn't this be achieved with a piston-like mechanism? In general, we produce things on factory lines, not by creating replicating machines. Useful replicating machines are certainly far beyond our capacity to make de novo (though we can tweak nature's small self-replicating machines).
I’m pretty confused by your paragraph describing the “futuristic bioreactor”. It doesn’t seem like we want almost any of those features for cultured meat.
The only parts that seem like they would be needed are "[...] assembling those molecules into muscle and fat cells, and forming those cells into the complex tissues we love to eat" and "It has precise environmental controls and gas exchange systems that keep the internal temperature, pH, and oxygen levels in the ideal range for cell and tissue growth".
Some (though not all) of the others seem like they might be useful if we were to try to make cultured meat production as decentralizable as current meat production (and far more decentralized than factory farming).
Do you think that different trajectories of prosaic TAI have big impacts on the usefulness of your current project? (For example, perhaps you think that agentic TAI would simply be taught to deceive.) If so, which trajectories? If not, could you say something about why your project seems general across them?
(NB: the above is not supposed to imply criticism of a plan that only works in some worlds).
Does it make sense to think of your work as aimed at reducing a particular theory-practice gap? If so, which one (that is, what theory, or what needed input for a theoretical alignment scheme)?