I think that basically all of these are being pursued, and many are good ideas. I would be less put off if the post title were ‘More people should work on aligning profit incentives with alignment research’, but suggesting that no one is doing this seems off base.
This is what I found after a few minutes of Google searching (I'm not endorsing any of these links beyond noting that they claim to do the thing described).
AI Auditing:
https://www.unite.ai/how-to-perform-an-ai-audit-in-2023/
Model interpretability:
https://learn.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability?view=azureml-api-2
Monitoring and usage:
https://www.walkme.com/lpages/shadow-ai/
Future Endowment Fund sounds a lot like an impact certificate:
https://forum.effectivealtruism.org/posts/4bPjDbxkYMCAdqPCv/manifund-impact-market-mini-grants-round-on-forecasting