The post argues that optimising models to minimise predictive loss on humanity’s text corpus can yield superhuman general intelligences. The opposing view is that this does not follow. Text prediction, while effective at predicting the behaviour of conscious entities, is limited in its ability to capture the full human experience. Moreover, even if a text prediction model were able to accurately predict the behaviour of conscious entities, it would not necessarily follow that it could instantiate conscious simulacra.
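For concreteness, here is a minimal sketch of the objective being debated: "minimising predictive loss on text" means minimising the average cross-entropy of the model's next-token predictions over a corpus. The snippet below is only illustrative; the `model` is a placeholder linear layer standing in for a real language model, and the tiny vocabulary and token sequence are made up for the example.

```python
import torch
import torch.nn.functional as F

# Toy setup: a tiny vocabulary and a placeholder "model" that maps a
# one-hot encoded token to logits over the next token. A real language
# model (e.g. a transformer) would take the whole context instead.
vocab_size = 8
model = torch.nn.Linear(vocab_size, vocab_size)

def predictive_loss(token_ids: torch.Tensor) -> torch.Tensor:
    """Average next-token cross-entropy over a sequence of token ids."""
    inputs = F.one_hot(token_ids[:-1], vocab_size).float()  # contexts
    targets = token_ids[1:]                                  # next tokens
    logits = model(inputs)
    return F.cross_entropy(logits, targets)

tokens = torch.tensor([3, 1, 4, 1, 5, 2, 6])  # made-up token ids
loss = predictive_loss(tokens)
loss.backward()  # training minimises this quantity over the whole corpus
```

The disagreement is not about this objective itself, which is uncontroversial, but about what capabilities emerge from driving it very low at scale.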
The post invokes Janus’ simulators thesis to argue that language models can model reality, citing GPT-3’s apparent ability to model the external world. Even granting this, language models learn only the relationships between words and their referents as reflected in text, not the underlying reality of the world itself. Moreover, GPT-3 has so far been tested on a limited set of tasks, and its capacity to model the external world remains in its infancy.
Finally, the post holds that text prediction can scale to superintelligence. Even if that is true, the risks of such an endeavour deserve serious consideration. Text prediction is not an inherently safe optimisation target: models trained on it can encode biases, propagate disinformation, and potentially give rise to autonomous agents with malicious intent. These risks must be weighed before attempting to scale text prediction models to superintelligence.