Interesting idea for a competition, but I don’t think the contest rules as designed, and more specifically the information hazard policy, are well thought out for submissions that make the case for longer timelines along the following line of argumentation:
1. Scaling current deep learning approaches in both compute and data will not be sufficient to achieve AGI, at least within the timeline specified by the competition.
2. This is because some critical component is missing from the design of current deep neural networks.
3. Supposing that this critical component is being ignored by current lines of research, and/or has otherwise been deemed intractable, AGI development is likely to proceed more slowly than the currently assumed status quo.
4. The Future Fund should therefore shift some portion of its probability mass for the development of AGI further into the future.
Personally, I find the above one of the more compelling cases for longer timelines. However, a crux of this argument is that the critical components in question are in fact largely ignored, or deemed intractable, by current researchers. Making that claim necessarily involves explaining the technology, component, or method in question, which could justifiably be deemed an information hazard, even if we only describe why the element may be critical rather than how it could be built.
Seems like this type of submission would likely be disqualified despite being exactly the kind of information needed to make informed funding decisions, no?