With digital sentiences, we don’t have homology. They aren’t based on brains, and they evolved by a different kind of selective process.
This assumes that the digital sentiences we are discussing are LLM-based. That is certainly a likely near-term possibility, maybe even occurring already. People are already experimenting with how conscious LLMs are and how they could be made more conscious.
In the future, however, many more things are possible. Digital people based on emulations of the human brain are already being worked on, and within the next few years we’ll have to decide as a society what regulation to put in place around them. Such beings would have a great deal of homology with human brains, depending on the accuracy of the emulation.
Yeah, rather than reading Roman’s argument as a reason not to use Squiggle, I see it more as a reason for Squiggle to incorporate some Python behind the scenes.
I think the target audience of Squiggle is people who aren’t comfortable with complex code, but who are comfortable with probabilistic thinking.
Seems like having a set of structured queries for LLMs, plus the custom Squiggle code, plus allowing the models to improvise Python and JS code, could be a powerful tool that would be much easier for most people to use.
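To make the "Python behind the scenes" idea concrete, here's a minimal sketch of how a Squiggle-style one-liner could be backed by plain Monte Carlo sampling in Python. This is purely illustrative: the helper names (`normal`, `lognormal`, `mixture`) mimic Squiggle's surface syntax but are my own stand-ins, not Squiggle's actual implementation or API.

```python
# Hypothetical sketch only: a Squiggle-like expression evaluated by plain
# numpy Monte Carlo under the hood, not Squiggle's real implementation.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo samples per distribution

def normal(mean, sd):
    return rng.normal(mean, sd, N)

def lognormal(mu, sigma):
    return rng.lognormal(mu, sigma, N)

def mixture(a, b, weights=(0.5, 0.5)):
    # Draw each sample from distribution a or b according to the weights.
    choose_a = rng.random(N) < weights[0] / sum(weights)
    return np.where(choose_a, a, b)

# The kind of one-liner a non-programmer (or an LLM) might write:
estimate = mixture(normal(5, 2), lognormal(1, 0.5))

print("median:", np.median(estimate))
print("90% interval:", np.percentile(estimate, [5, 95]))
```

The point is that the person writing the estimate only deals with the distribution-level expression on the `estimate` line; everything the LLM or the tool improvises in Python or JS stays hidden behind that interface.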