I don’t know how to interpret ‘the space of all possible intelligent algorithms’ as a statistical claim without imagining it populated with actual instances.
Not my field, but my understanding is that using a uniform prior is pretty standard in theoretical CS.
Even if you think a uniform prior carries zero information (a disputed position in philosophy), we have lots of information to update on here, e.g. that programmers will want AI systems to have certain motivations, and that they won’t want to be killed.
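To make the updating point concrete, here is a toy sketch (my own illustration, not from the thread) of how a uniform prior stops being uniform once evidence is folded in via Bayes’ rule. The hypothesis names and likelihood numbers are made up purely for illustration.

```python
# Toy Bayesian update starting from a uniform prior.
# Hypotheses: possible motivations an AI system might end up with
# (hypothetical labels for illustration only).
hypotheses = ["aligned", "indifferent", "hostile"]

# Uniform prior: no hypothesis favored before any evidence.
prior = {h: 1 / len(hypotheses) for h in hypotheses}

# Hypothetical likelihoods P(evidence | hypothesis), where the
# "evidence" is that programmers select for motivations they want.
# These numbers are invented for the sketch.
likelihood = {"aligned": 0.8, "indifferent": 0.3, "hostile": 0.05}

# Bayes' rule: posterior is proportional to prior * likelihood.
unnormalized = {h: prior[h] * likelihood[h] for h in hypotheses}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in hypotheses}

print(posterior)
```

The point is only structural: a uniform prior is a starting point, and any relevant evidence shifts the distribution away from uniformity.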
What do you mean by “uniform prior” here?