(How) Is technical AI safety research being evaluated?

Technical AI safety research seems to get a lot of attention and funding in the EA community. EA was born out of effective-giving evaluations that weighed things like expected value and marginal impact. For longtermist fields like technical AI safety there is probably much less clear data about impact, but one still ought to make good estimates of whether investing careers and money in technical AI safety comes anywhere close to the cost-effectiveness of malaria bednets.
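To make the bednet comparison concrete, here is a minimal back-of-the-envelope sketch of the kind of expected-value estimate I have in mind. Every number and parameter name in it (the probability of marginal risk reduction, the value of an averted catastrophe, the spending figure) is a hypothetical placeholder for illustration, not an actual estimate.

```python
# Hypothetical back-of-the-envelope expected-value comparison.
# All figures below are illustrative placeholders, not real estimates.

# Bednets: roughly $5,000 per life saved is a commonly cited ballpark;
# treat a life saved as roughly 50 DALYs averted for illustration.
bednet_cost_per_life_usd = 5_000
bednet_dalys_per_dollar = 50 / bednet_cost_per_life_usd

# Technical AI safety: an expected-value estimate would need at least
# (a) the probability that marginal research reduces existential risk,
# (b) the value (in DALYs) of that risk reduction if it succeeds, and
# (c) the total spending required.
p_marginal_risk_reduction = 1e-6      # placeholder probability
value_if_successful_dalys = 1e12      # placeholder value of averted catastrophe
spending_usd = 100_000_000            # placeholder total spending

ai_safety_dalys_per_dollar = (
    p_marginal_risk_reduction * value_if_successful_dalys / spending_usd
)

print(f"Bednets:   {bednet_dalys_per_dollar:.4f} DALYs per dollar")
print(f"AI safety: {ai_safety_dalys_per_dollar:.4f} DALYs per dollar")
```

The point of the sketch is not the output but the inputs: each of those placeholder parameters is exactly the kind of quantity I would expect an evaluator to be estimating and monitoring.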

Thus, this question is specifically asking how we do, or whether we even can, evaluate the technical AI safety research that is being done. Are there orgs/people that look at how much funding EA is pouring into technical AI safety and what the outcomes are? Is there such a thing as cost-benefit analysis, or monitoring and evaluation, for technical AI safety research?