Thanks so much for taking a deeper look at one of the articles! I think you’re right: a somewhat lower rating seems more appropriate in this case.
I believe two things are true of the algorithm behind Actually Relevant: 1) Almost all posts are more important for humanity than 90% of news articles by other outlets; in that sense, it’s already useful. 2) Many relevance analyses are still off by at least one grade on the rating scale, meaning some posts get a “major” or “critical” tag that they shouldn’t. The idea is to use community and expert feedback to fine-tune the prompts for even better results in the future. I also want to involve a human editor who could double-check and adjust dubious cases.
In the post you referenced, the AI says: “The eviction has affected over 70,000 people and risks cultural extinction for the Maasai people. It also highlights the need for a reevaluation of international legal norms and systems around land rights. In certain scenarios, this situation could lead to a broader movement for indigenous land rights in Tanzania and beyond, making it an issue that is far more relevant for humanity than the number of directly affected people would suggest.” I think it’s a good sign that the algorithm recognized that the potential extinction of an entire culture and developments around indigenous land rights should raise the rating beyond what the number of directly affected people would suggest. It might still be off in this case, but I’m optimistic that additional fine-tuning can get us there.