I'm now over 20 minutes in and haven't quite figured out what you're looking for. Just to dump my thoughts (not necessarily looking for a response):
On the one hand it says "Our goal is to discover creative ways to use AI for Fermi estimation," but on the other hand it says "AI tools to generate said estimates aren't required, but we expect them to help."
From the Evaluation Rubric, "model quality" is only 20%, so it seems the primary goal is neither to create a good "model" (which I understand to mean a particular method for making a Fermi estimate on a particular question) nor to see whether AI tools can be used to create such models.
The largest weight (40%) goes to whether the *result* of the model (i.e., the actual estimate the model produces once the numbers are plugged in) is surprising, with more surprising being better. But it's unclear to me whether the estimate actually needs to be believed for it to count as surprising. Extreme numbers could just mean the output is bad or wrong, not that it should be taken as evidence of anything.
Thanks for the feedback!
We're just looking for a final Fermi model. You can use AI to come up with it, or not.
"Surprise" is important because that's arguably what makes a model interesting. If you have a big model about the expected impact of AI, and it tells you the answer you expected going in, then arguably it's not a very useful model.
The "Surprise" part of the rubric doesn't require the model to be great, but the other parts of the rubric do weight that. So a model that's very surprising but otherwise poor might do well on the "Surprise" measure, but won't on the other measures, and so on average will get a mediocre score.
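A toy calculation makes this concrete. Only two weights are stated in the thread (Surprise 40%, model quality 20%); lumping the remaining 40% into a single "other" bucket is my assumption for the sake of illustration:

```python
# Hypothetical sketch of the rubric math. The 40% (surprise) and 20%
# (model quality) weights come from the thread; the "other" 40% bucket
# is an assumption standing in for the remaining criteria.
WEIGHTS = {"surprise": 0.40, "model_quality": 0.20, "other": 0.40}

def weighted_score(scores: dict) -> float:
    """Weighted average of per-criterion scores, each on a 0-10 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# A very surprising but otherwise poor model still averages out mediocre:
surprising_but_poor = {"surprise": 9, "model_quality": 2, "other": 2}
print(weighted_score(surprising_but_poor))  # 4.8 out of 10
```

So even a 9/10 on Surprise can't carry a submission on its own under these assumed weights.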
Note that there have been a few submissions on LessWrong so far; those might make things clearer:
https://www.lesswrong.com/posts/AA8GJ7Qc6ndBtJxv7/usd300-fermi-model-competition#comments
"On the one hand it says 'Our goal is to discover creative ways to use AI for Fermi estimation' but on the other hand it says 'AI tools to generate said estimates aren't required, but we expect them to help.'"
-> We're not forcing people to use AI, in part because it would be difficult to verify. But I expect that many people will do so, so I still expect this to be interesting.