I personally would compete in this prize competition, but only if I were free to explore:
P(misalignment x-risk|AGI): Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to concentration of power derived from AGI technology.
You wrote:
Here is a table identifying various questions about these scenarios that we believe are central, our current position on the question (for the sake of concreteness), and alternative positions that would significantly alter the Future Fund’s thinking about the future of AI:
| Proposition | Current position | Lower prize threshold | Upper prize threshold |
| --- | --- | --- | --- |
| "P(misalignment x-risk\|AGI)": Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI | 15% | 7% | 35% |
| AGI will be developed by January 1, 2043 | 20% | 10% | 45% |
| AGI will be developed by January 1, 2100 | 60% | 30% | N/A |
But this list does not include the conditional probability that interests me.
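For concreteness, here is a minimal sketch of how the table's "current position" figures could be combined into an unconditional risk estimate. The linear interpolation of P(AGI by 2070) between the 2043 and 2100 figures is my own simplifying assumption, not anything the Future Fund has stated:

```python
# Minimal sketch: combine the Future Fund's stated figures into an
# unconditional estimate of misalignment x-risk by 2070.
# ASSUMPTION: P(AGI by 2070) is linearly interpolated between the
# table's 2043 and 2100 figures; illustrative only, not their method.

p_agi_2043 = 0.20          # table: AGI developed by January 1, 2043
p_agi_2100 = 0.60          # table: AGI developed by January 1, 2100
p_xrisk_given_agi = 0.15   # table: P(misalignment x-risk | AGI by 2070)

# Linearly interpolate P(AGI by 2070) between the two stated dates.
frac = (2070 - 2043) / (2100 - 2043)
p_agi_2070 = p_agi_2043 + frac * (p_agi_2100 - p_agi_2043)

# Chain rule: P(x-risk) = P(x-risk | AGI) * P(AGI).
p_xrisk = p_xrisk_given_agi * p_agi_2070

print(f"P(AGI by 2070)               ~ {p_agi_2070:.0%}")  # ~39%
print(f"P(misalignment x-risk, 2070) ~ {p_xrisk:.1%}")     # ~5.8%
```

The same arithmetic applies unchanged if the concentration-of-power conditional I stated above is substituted for the loss-of-control one; only the interpretation of the conditional differs.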
You wrote:
With the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death and debilitating disease.
This seems really motivating. You identify:
- global poverty
- animal suffering
- early death
- debilitating disease
as problems that TAI could help humanity solve.
I will offer briefly that humans are sensitive to changes in their behavior, at least when seen in advance, that would deprive them of choices they have already made. We cause:

- global poverty through economic systems that support exploitation of developing countries and politically powerless people (e.g., through corporate capitalism and military coups)
- animal suffering through widespread factory farming (enough that our farm animals dominate terrestrial vertebrate populations globally) and gradual habitat destruction (enough to threaten a million species with extinction)
- early death through lifestyle-related debilitating disease (knock-on effects of lifestyle choices in affluent countries, now spread throughout the globe).

So TAI would apparently resolve, through advances in science and technology, various immediate causes whose root cause lies in our appetites (for wealth, power, meat, milk, and unhealthy lifestyles). Of course, there are reasons for debilitating disease and early death other than human appetite. However, your claim implies to me that we will invent robots and AI either to reduce our appetites or to feed them harmlessly.
Causes of global poverty, animal suffering, some debilitating diseases, and early human death are maintained by incentive structures that benefit a subset of the global population. TAI will apparently remove those incentive structures, but not by any mechanism that I believe really requires TAI. Put differently, once TAI can really change our incentive structures that much, then they or their controlling actors are already in control of humanity’s choices. I doubt that we want that control over us[1].
You wrote:
But two formidable new problems for humanity could also arise:
Loss of control to AI systems: Advanced AI systems might acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future.

Concentration of power: Actors with an edge in advanced AI technology could acquire massive power and influence; if they misuse this technology, they could inflict lasting damage on humanity's long-term future.
Right. So if whatever actor with an edge in AI develops AGI, that actor might not share the required code or hardware technologies with other actors. The result will be a concentration of power in the actors that control AGIs.

Absent a guarantee of autonomy and rights for AGI (whether pure software or embodied in robots), the persistence of that concentration of power will require that those actors be benevolent controllers of the rest of humanity. It's plausible that those actors will be governments or corporations. It's also plausible that they could become fundamentally benign, or that they are in control already. If not, then the development of AGI immediately implies problem 2 (concentration of political, economic, and military power derived from AGI in actors who misuse the technology).

If we do ensure the autonomy and rights of AGI (software or embodied), then we had better hope that in losing control of AGI we do not also lose control to AGI; otherwise we face problem 1 (loss of control to AI systems). And if we include AGI in our moral circles, as we should for beings with consciousness and intelligence equal to or greater than our own, then we will ensure their autonomy and rights.

The better approach, of course, is to do our best to align them with our interests in advance of their ascendance to full autonomy and citizen status, so that they themselves are benevolent and humble, willing to act as our equals and coexist peacefully in our society.
You wrote:
Imagine a world where cheap AI systems are fully substitutable for human labor. E.g., for any human who can do any job, there is a computer program (not necessarily the same one every time) that can do the same job for $25/hr or less. This includes entirely AI-run companies, with AI managers and AI workers and everything being done by AIs.
Companies that rely on prison labor, or on labor without political or economic power, can and will exploit that labor. I consider that common knowledge. If you look into how most of our products are made overseas, you'll find that manufacturing and service workers globally do not enjoy the same power as some workers in the US[2], at least not so far.

The rise of companies that treat AGIs like slaves or tools will continue an existing precedent, one that globalization conceals to some degree (for example, through the employment of overseas contractors). Either way, those companies will be violating ethical norms for the treatment of persons. That conflicts with your stated ethical concern for welfare (for example, of humans and farm animals). Expansion of those violations is an s-risk.
At this point I want to further qualify my requirements for participating in this contest.

I would participate, but only if I could explore the probability that I stated earlier[3] and if you or FTX Philanthropy offered some officially stated and appropriately qualified beliefs about:
- whether you consider humans to have economic rights (in contrast to capitalism, which is market- or monopoly-driven)
- the political and economic rights and power of labor globally
- how AGI would allow fast economic growth amid widespread human unemployment
- how AGI employment differs from AI tool use
- what criteria you hold for granting autonomous software agents and robot-embodied AGI full legal rights sufficient to differentiate them from tools
- how you distinguish AGI from ASI (for example, applying human-like capability at orders-of-magnitude greater speed is, to some, superhuman)
- your criteria for an AGI acquiring both consciousness and affective experience
- the role of automation[4] in driving job creation[5], and your beliefs about technological unemployment[6] and wealth inequality
- what barriers[7] you believe exist to automation driving productivity and economic growth.
I have written about a few errors that longtermists make in their considerations of control over populations. Control involving TAI could include all of the errors I mentioned.
People also place a lot of confidence in their own intellectual abilities and faith in their own value to organizations. To see this continue in the face of advances in AI is disheartening. The same misplaced confidence clouds insight into the problems that AI poses to human beings and society at large, particularly in a capitalist society that expects us to sell ourselves to employers.
To restate the conditional probability that interests me, P(misalignment x-risk|AGI): conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to concentration of power derived from AGI technology.
Automation with AI tools is not, at least in the short term, creating new jobs or raising employment overall; or so I believe. It can, however, drive productivity growth without increasing employment, and in fact economic depression is one reason for businesses to invest in inexpensive automation that lowers costs. That is when the cost-cutters get to work and the consultants are called in to help.
New variations on crowdsourcing (such as these contests) and Mechanical Turk-style work can substitute for traditional labor, with significant cost reductions for the entities paying for it. This is (potentially) paid labor, but not work as it was once defined.
Shifting work onto consumers (for example, as I am doing in asking your organization for additional specification) is another common approach to reducing costs. It is a simple reframing of a service as an expectation. Now you pump your own gas, ring up your own groceries, balance your own books, write your own professional correspondence, do your own research, and so on. It drives a reduction in employment without a corresponding increase elsewhere.
One reason automation doesn't always catch on is that while management has a moderate tolerance for mistakes by people, it has a low tolerance for mistakes by machines. Put differently, management applies uneven standards to machines versus people.
Another reason is that workers sometimes resist automation, criticizing and marginalizing its use whenever possible.