Medium term AI forecasting with Metaculus
I’m working on a collection of metaculus.com questions intended to generate AI-domain-specific forecasting insights. These questions are intended to resolve in the 1-15 year range, and my hope is that, if they’re sufficiently independent, we’ll get a range of positive and negative resolutions that will inform future forecasts.
I’ve already gotten a couple of them live, and am hoping for feedback on the rest:
1. When will AI outperform humans on argument reasoning tasks?
2. When will multi-modal ML outperform uni-modal ML?
3. (Not by me) When will image recognition be made robust against unrestricted adversaries?
4. (WIP) When will reinforcement learning methods achieve sample efficiency within four orders of magnitude of human efficiency?*
5. (WIP) When will unsupervised learning methods achieve human level performance on image classification?
The more questions the better, so please make suggestions. Of course, we have to avoid burdening the good folks working at Metaculus, so 8-10 questions is probably the maximum I’d be willing to personally submit.
*I am not very familiar with reinforcement learning, so input here would be particularly helpful! What is the best way to operationalize this question? How many orders of magnitude? Is there a relevant benchmark? I’d be happy for someone else to take the credit and post the question themselves as well!
You might be familiar with https://ai.metaculus.com/questions/. It went dormant, unfortunately.
Yes, I recently asked a Metaculus mod about this, and they said they’re hoping to bring back the ai.metaculus subdomain eventually. For now, I’m submitting everything to the main Metaculus domain.