Just to mention that with sufficiently good simulation technology, experimental data may not be necessary; and if experimental data is what sets your timescale, things could happen a lot faster than you’re estimating. We don’t have that tech now, but in at least some domains it has the shape of something that could be solved with lots of cognitive resources thrown at the problem.
I’m thinking specifically about simulating systems of large (but still microscopic) numbers of atoms, where we know the relevant physical laws and mostly struggle to approximate them in realistic ways.
My intuition here is rough, but I think the core factors driving it are:
- Current R&D structures really don’t incentivize building good simulation tools outside of narrow domains.
  - In academia, physical simulation tools are often only valued for the novel results they produce, and it can be hard to fund simulation-development efforts (particularly ones involving multiple people, which is often what’s needed).
  - In industry, there’s no reason to develop a tool with broader applicability than the domain you need it for, so you get more narrowly tailored tooling than you’d need for totally transformative R&D.
- It’s often not that hard to devise approximations that work in one domain or another, but it is very tedious to “stitch” the different approximations together into something that works over a broader domain. This further pushes people away from building broadly useful simulation tools.
There has been a fair amount of success using neural networks to directly approximate physical systems, either by training them on expensive brute-forced simulations or by framing the simulation as an optimization problem and using the neural network as the ansatz. Examples include the quantum many-body problem, turbulence closures, and cosmology simulations.
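To make the “neural network as ansatz” framing concrete, here is a minimal sketch (my own illustration, not from any particular paper): a small network parameterizes a trial solution u(x) to a toy differential equation, and training just minimizes the equation residual plus boundary penalties. The specific equation, network size, and hyperparameters are arbitrary choices for the example.

```python
# Minimal sketch: a neural network as the ansatz for a simulation framed as
# an optimization problem. Here the "simulation" is just the toy ODE
# u''(x) = -sin(x) on [0, pi] with u(0) = u(pi) = 0 (exact solution sin(x));
# real uses (quantum many-body states, turbulence closures) are far larger.
import torch

torch.manual_seed(0)

# Small fully-connected trial function u_theta: R -> R.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def residual(x):
    """Pointwise ODE residual u''(x) + sin(x), via automatic differentiation."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u + torch.sin(x)

boundary = torch.tensor([[0.0], [torch.pi]])

for step in range(2000):
    x = torch.rand(256, 1) * torch.pi        # random collocation points in [0, pi]
    loss = residual(x).pow(2).mean() + net(boundary).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained ansatz should roughly match sin(x), e.g. ~1.0 at x = pi/2.
print(net(torch.tensor([[torch.pi / 2]])).item())
```

The same pattern, with the residual loss swapped for a variational energy, is the shape of the approach when a network serves as the ansatz for a quantum many-body wavefunction.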