I don’t think this passes an LT funding or talent bar, but an idea I’ve been interested in for a while is a way for people to anonymously report sexual harassment or abuse, or possibly abuse in general*.
I haven’t thought much about implementation details, but the idea would be for the accused not to be exposed until there are at least 3 reports or so, to reduce false positives and on the assumption that most abusers are serial abusers.
There are some technical nuances. Specifically, you want a way for the website to check the uniqueness of reporters’ identities (so someone can’t create 30 fake accounts to report) without exposing those identities to outsiders. It would also be good for the website not to store accusers’ identities in the backend, for obvious security reasons. You can address this with a number of privacy techniques (the most obvious one I can think of is storing a hash of people’s Facebook unique IDs, though of course this isn’t great; there may well be a better off-the-shelf solution).
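To make the two mechanisms above concrete, here is a minimal Python sketch, with all names hypothetical. It combines the hashed-identity idea with the ≥3-report threshold: the backend stores only a keyed hash (HMAC with a server-side secret) of each reporter’s ID rather than the ID itself, since a plain unsalted hash of Facebook IDs could be brute-forced over the space of known IDs. This is an illustration of the general approach, not a vetted design; a real system would need far more care (key management, legal review, abuse handling).

```python
import hmac
import hashlib
from collections import defaultdict

REVEAL_THRESHOLD = 3  # assumption from the comment: >=3 unique reports before exposure


class ReportRegistry:
    """Hypothetical sketch: deduplicate reporters without storing raw identities.

    The backend keeps HMAC(server_secret, reporter_id) instead of the raw ID,
    so it can detect duplicate reporters without holding their identities.
    """

    def __init__(self, server_secret: bytes):
        self._secret = server_secret
        # accused -> set of pseudonymous reporter tokens
        self._reports: dict[str, set[str]] = defaultdict(set)

    def _token(self, reporter_id: str) -> str:
        # Keyed hash: without the server secret, tokens can't be brute-forced
        # from a list of candidate IDs the way a bare hash could.
        return hmac.new(self._secret, reporter_id.encode(), hashlib.sha256).hexdigest()

    def submit_report(self, reporter_id: str, accused: str) -> bool:
        """Record a report; return True once the reveal threshold is met.

        Duplicate reports from the same reporter map to the same token,
        so they don't inflate the count.
        """
        self._reports[accused].add(self._token(reporter_id))
        return len(self._reports[accused]) >= REVEAL_THRESHOLD


# Usage sketch:
registry = ReportRegistry(b"server-side secret key")
registry.submit_report("alice_fb_id", "accused_1")   # 1 unique reporter
registry.submit_report("alice_fb_id", "accused_1")   # duplicate, still 1
registry.submit_report("bob_fb_id", "accused_1")     # 2
met = registry.submit_report("carol_fb_id", "accused_1")  # 3 -> threshold met
```

Note the remaining weakness the original comment flags: tying uniqueness to Facebook IDs still leaks identity at submission time, so a more serious design would look at existing cryptographic escrow approaches rather than this ad-hoc scheme.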
*One use case I’m tangentially familiar with is abuse of power by PhD supervisors.
EDIT: I decided to retract this comment because the space of potential altruistic projects is extremely wide, and even though I’m inside-view more excited about this project than many others, it still seems like a bad norm to suggest things to an EA audience that even I don’t think would be competitive with top LT projects.