What kinds of predictions do we make? Here are some examples:
Who suggests the outcomes that are predicted? Are they taken from the application or from a predefined set? Do you keep and update a database of outcomes you are trying to achieve (e.g. complementary aspects of long-term wellbeing and security)?
How do you combine the estimates? Is there a function or conditional expression?
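To make the question concrete, here is a minimal sketch of what such a function and conditional expression could look like. Everything here is an illustrative assumption, not the fund's actual method: the log-odds pooling rule, the weights, and the veto floor are all hypothetical choices.

```python
import math

def combine_estimates(probs, weights=None):
    """Pool several evaluators' probability estimates by averaging
    their log-odds (one common aggregation choice; illustrative only)."""
    if weights is None:
        weights = [1.0] * len(probs)
    total = sum(weights)
    pooled_log_odds = sum(
        w * math.log(p / (1 - p)) for p, w in zip(probs, weights)
    ) / total
    return 1 / (1 + math.exp(-pooled_log_odds))

def combine_with_floor(probs, floor=0.05):
    """A conditional expression layered on top: if any evaluator is
    near-certain of failure, defer to that judgment instead of pooling."""
    if min(probs) < floor:
        return min(probs)
    return combine_estimates(probs)
```

For example, pooling 0.9 and 0.5 this way gives 0.75, whereas with estimates of 0.9 and 0.02 the floor rule returns 0.02. Whether the fund uses anything like this, a simple mean, or a discussion-based override is exactly what the question asks.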
Given the human brain's unparalleled computing power and breadth of prioritized knowledge, would it be better to rely on the grant managers' intuition? Especially since the quantification depends on expert insight anyway, it does not avoid subjectivity.
Does engaging multiple evaluators reduce bias when quantifications are used? Navigating group dynamics[1] in qualitative discussions can optimize for accuracy: discussants perceive each other's expertise and weigh everyone's perspectives in complex ways while those viewpoints are still developing.[2]
A relevant expressive statement could be: don't get tricked by AI; it is better to be human, if no threats are present.
Individual quantitative estimates can be subject to 'normative' bias, where respondents answer with whatever seems most appropriate under static norms. Discussions can instead optimize for (collective) problem solving, notwithstanding traditional norms. This also applies to the evaluation of subjective outcomes, such as 'has the grantee influenced expert X well?'