The methodological diversity necessary to get any consilience in highly abstract areas makes it very hard for donors to evaluate such projects. Many of the ideas that form the basis of the AI memeplex, for instance, originally came from druggy artist-scientists. So what happens in practice is that funding revolves around smoking-gun-style, highly legible philosophical arguments, even though we know this is more hedgehog than fox, and that it guarantees we'll only, on average, prepare for dangers that large numbers of people can comprehend.
Concretely: the more money you have, the higher the variance on weird projects you should be funding. If the entire funding portfolio of the Gates Foundation consists of things almost everyone thinks sound like good ideas, that's a failure. It's understandable for small donors: you don't want to 'waste' all your money only to have nothing you fund work. But if you have $10 billion, and thus need to spend $500 million to $1 billion a year just to keep your fund from growing, you should be spending a million here and there on things most people think are crazy (how quickly we forget concrete instances, like the initial responses to the idea of shrinking objects to the nanoscale). This is a fairly straightforward port of reasoning from startup land.
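The startup-land arithmetic can be made explicit with a toy expected-value comparison. Every number below (success probabilities, payoff multiples, grant sizes) is invented purely for illustration, not drawn from any real foundation's data:

```python
# Toy sketch of the high-variance funding argument.
# All probabilities and payoff multiples are made-up illustrative values.

def expected_value(p_success: float, payoff_multiple: float) -> float:
    """Expected return multiple of a single grant."""
    return p_success * payoff_multiple

# A 'consensus' grant: very likely to work, modest impact.
safe_ev = expected_value(p_success=0.9, payoff_multiple=1.5)    # 1.35x

# A 'crazy' grant: almost always fails, huge impact when it doesn't.
weird_ev = expected_value(p_success=0.02, payoff_multiple=100)  # 2.0x

# A $1B annual budget split into 1,000 grants of $1M each.
budget, n_grants = 1_000_000_000, 1_000
per_grant = budget / n_grants

print(f"safe portfolio EV:  ${n_grants * per_grant * safe_ev:,.0f}")
print(f"weird portfolio EV: ${n_grants * per_grant * weird_ev:,.0f}")
```

The point of the sketch is that a large budget buys enough draws for the law of large numbers to kick in: a small donor making one or two bets can't afford a 98% failure rate, but a foundation making a thousand $1M bets realizes something close to the expected value, so the higher-EV, higher-variance portfolio wins even though almost every individual grant looks like a waste.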