Hi!
I don’t think I will participate in this contest, because:
- pursuing AGI is an ethical no-no for me.
- I like expert systems technology for what it offers.
- I don’t have much background knowledge on AGI risk.
- I am not comfortable with subjective probability as you use it for forecasting.
However, several questions came up for me as I read about this prize. I thought I would offer them as a good-faith effort to clarify your goals here.
There are significant risks to human well-being, aside from human extinction, that are plausible in the event of AGI or ASI development. Narrowing your question to whether extinction risk is a concern ignores various existential or suffering risks associated with AGI development. Is that what you intend?
EDIT: I believe some edits were made, so this question is no longer current.
Your scenario of a company staffed by AI is implausible without additional assumptions about the legal status of AGI entities. Those assumptions presume societal changes and AGI governance under existing or new laws. Can you constrain your description of a future where AGI perform tasks so that the legal distinction between AGI and software tools is clear?
Your idea of AGI presumably contrasts with rentable software instances that perform tasks and rely on a common pool of knowledge and capabilities. For example, I could rent multiple John Construction Worker instances to operate construction bots for a particular project. I don’t pay the instances wages; I just pay to rent them and the robotic construction equipment.
In the event that automation allows AI to perform all human tasks, robot hardware will carry out human activities. Robots can have their intelligence and knowledge tied at a hardware level to their bodies. For example, their learning can occur through training of their bodies rather than solely through software downloads, and that learning can be kept as local data only. Their affective experience can potentially be linked to the action of their bodies in particular activities. They can also bear a superficial similarity to humans, particularly if their robot bodies are humanoid and employ similar senses (vision, hearing, tactile sense). These and other differences fulfill some of the description of a future containing AGI, but have different implications for the type of extinction threat posed by robots. Is that a distinction you consider worth making for the purposes of your contest?
When you write that AGI might do work at a rate of $25/hr, that seems implausible. In particular, a human-like intelligence, freed from the data-processing constraints a human faces in a single focused activity (for example, researching a topic), can do some of the tasks involved at near-instantaneous rates compared to a human. A human might take a week to read a book that an AGI reads in a couple of milliseconds. Puzzling through the intuitions and logical implications of what was read could take a human months, but an AGI could do the ontology refinement and knowledge development in under a hundred milliseconds. Again, can you constrain your example of AGI working like humans? For example, are you referring mainly to physical labor that AGI perform, perhaps through a humanoid robot?
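To make the scale of that gap concrete, here is a rough back-of-envelope sketch using the illustrative numbers above (all figures are hypothetical, chosen only to show orders of magnitude, not measured capabilities):

```python
# Rough, hypothetical speedups implied by the illustrative numbers above.
human_reading_s = 7 * 24 * 3600          # ~1 week for a human to read a book
agi_reading_s = 0.002                    # ~2 milliseconds for an AGI (assumed)

human_reasoning_s = 2 * 30 * 24 * 3600   # ~2 months to work through the implications
agi_reasoning_s = 0.1                    # ~100 milliseconds for an AGI (assumed)

print(f"reading speedup:   ~{human_reading_s / agi_reading_s:.0e}x")     # ~3e+08x
print(f"reasoning speedup: ~{human_reasoning_s / agi_reasoning_s:.0e}x")  # ~5e+07x
```

If the speedup really is seven or eight orders of magnitude, a flat $25/hr figure for that kind of mental labor is hard to interpret, which is why I ask whether the figure is really about physical labor.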
Assuming an AGI performs its labor at the speed of software rather than the speed of a human exercising their intellect, we can agree that an AGI will do mental labor much faster than humans. However, physical labor depends on the task and on robotic hardware limits as well as software limits, and the speed differences between humans and robot hardware are not as large. Hardware can get better, but not many orders of magnitude faster than a human (though perhaps many times more precise or reliable).
A further complication is that redefining tasks can alter the resource requirements and output constraints of labor. I’m not sure how that would affect an economic model of labor. Can you specify that (1) physical/engineering limits on robotics and (2) task redefinition don’t matter to your economic criteria for AGI spread, or is discussion of them something you are looking for in the essays submitted to you?
Given the possibility that the hardware required to support a purely software entity remains expensive, such entities might not be available for most types of work. Simpler data-processing tools, and robots that seem advanced by our standards but are much slower, cheaper, and simpler, could be widely available while actual AGI and conscious robots (or artificial life) remain fairly rare, reserved for very important jobs where human fallibility is perceived as too costly.
If the expected increase in productivity and innovation from AI investments pans out, but without AGI participating in most economic activity, is that relevant to your philanthropic interests here? That is, do you care only about the timing of the development of the first AGI, or about a time when AGI are common or cheap, or just when automation is replacing most jobs, or something else?
Thank you for your time and good luck with your contest!
Also, good luck to the contestants!
:)
What’s the difference between extinction risk and existential risk?
From the wiki: “An existential risk is the risk of an existential catastrophe, i.e. one that threatens the destruction of humanity’s longterm potential.” That can include getting permanently locked into a totalitarian dictatorship and things of that sort, even if they don’t result in extinction.
Thank you! And doubly thank you for the topic link. In case others are confused, I found the end of this post particularly clear: https://forum.effectivealtruism.org/posts/qFdifovCmckujxEsq/existential-risk-is-badly-named-and-leads-to-narrow-focus-on