OPEC for a slow AGI takeoff

The topic of AI remains contentious even among experts, who have not reached a consensus on AGI or AI safety. This is exemplified by the three pioneers of deep learning who shared the 2018 Turing Award: Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, each of whom holds a different position. Hinton previously fell on the more benign side of the spectrum, but he has since updated his view and now recognizes the importance of controlling AI, stating that “we have to think hard about how to control it.” Even so, he still sits somewhere in the middle of the spectrum, unlike Bengio, who advocates an immediate pause on training AI systems more powerful than GPT-4, and LeCun, who opposes a moratorium on training models larger than GPT-4 and thinks the AI alignment problem has been overemphasized.

Given this disagreement, it seems prudent to err on the side of caution, particularly with something as risky as a potentially unaligned AGI. It is better to be safe than sorry: a mistake in this area could be catastrophic.

To be clear, by an unaligned AGI I mean an AGI that does not carry out its operator’s intended actions; I am not referring to deviation from human values or similar notions, since those can vary.

Many people hold that the pace of developing and training LLMs such as GPT-4 will inevitably slow down because of a scarcity of available or accessible data, given that the most recent models have already been trained on a large fraction of the publicly available internet.
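
For a rough sense of why this view seems plausible, here is a minimal back-of-envelope sketch in Python. The token figures and the roughly 20-tokens-per-parameter heuristic are my own illustrative assumptions, drawn from published scaling-law discussions (e.g. the Chinchilla results), not figures from this essay or from any lab’s disclosed training data.

```python
# Back-of-envelope sketch of the "running out of data" argument.
# All figures are illustrative assumptions, not measurements:
# published estimates of usable public text vary widely (roughly
# tens of trillions of tokens), and the ~20 tokens-per-parameter
# rule of thumb comes from the Chinchilla scaling results.

PUBLIC_TEXT_TOKENS = 30e12   # assumed stock of usable public text, in tokens
TOKENS_PER_PARAM = 20        # Chinchilla-style compute-optimal heuristic

for params in (70e9, 500e9, 2e12, 10e12):
    tokens_needed = params * TOKENS_PER_PARAM
    share = tokens_needed / PUBLIC_TEXT_TOKENS
    print(f"{params/1e9:>8.0f}B params -> "
          f"{tokens_needed/1e12:.1f}T tokens needed "
          f"({share:.0%} of assumed public text)")
```

Under these assumptions, compute-optimal training of models much beyond a few hundred billion parameters would consume most or all of the assumed stock of public text, which is the intuition behind the “we will run out of data” claim.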

For the sake of argument, I will set that consideration aside and assume that training on internet data will continue. Instead, I will focus on strategies to slow the pace of training as much as feasible.

Before delving into my argument, let’s draw an analogy and examine the situation of oil before and after the formation of OPEC.

The Organization of the Petroleum Exporting Countries (OPEC) was formed in September 1960 by five founding members: Iran, Iraq, Kuwait, Saudi Arabia, and Venezuela. Its formation was a response to the dominance of Western oil companies, such as Royal Dutch Shell and the various Standard Oil companies, which controlled the production, pricing, and marketing of oil from the Middle East. Initially, OPEC had limited success, but everything changed in 1973, when the Yom Kippur War and the oil embargo subsequently imposed by OPEC’s Arab members on the US and other Western countries gave the organization a significant boost in power and influence. The oil-producing states that formed the cartel realized the potential power they could wield over the West. The embargo led to a global oil crisis and a sharp rise in oil prices, which quadrupled in just a few months, demonstrating OPEC’s ability to wield its collective power and influence global oil markets.

As oil production ran at close to full capacity and the United States lacked the spare production to cover any shortfall, the balance of power shifted from the Western oil companies to OPEC. OPEC took control of the oil market, and oil was no longer a seemingly limitless resource that industry could exploit for profit as needed. Instead, OPEC controlled its supply and price, shifting the power dynamic in the industry.

Consider a scenario where data takes the place of oil and companies pursuing AGI research, such as OpenAI, take the place of oil companies. To effectively slow down AGI research, one could limit the seemingly limitless resource of data by enacting stronger data protection laws.

Such laws would allow individuals to sue companies whenever even a remote connection can be drawn between the output of a chat-based LLM and data they have created, written, or uploaded to the internet. What the world needs is a data cartel, analogous to OPEC, that would oblige these companies to pay for the data on which they intend to train their language models.

This would result in a significant slowdown, particularly of the kind of AGI research that currently advances by feeding ever more data to models. If companies can no longer rely solely on data, they will need to research and develop smarter and better ways to build strong LLMs, which would take a substantial amount of time. That time would give AI safety researchers a slower takeoff and ample opportunity to prepare for the potential arrival of AGI and to figure out how humanity can best manage it.

In conclusion, stronger data protection laws and a data cartel that obliges companies to pay for the data they use to train their language models could significantly slow AGI research. This would give AI safety researchers more time to prepare for the potential arrival of AGI and to devise strategies for managing it effectively. A slower takeoff would also give companies the opportunity to develop smarter and better ways to build strong LLMs. Ultimately, regulating data could lead to a safer and more manageable future for AI, one that benefits humanity as a whole.
