Whistleblowing tiny wrongdoings to prevent higher-level fraud
Since the FTX failure hit the EA community hard, there is a high chance that EA will add more and more regulation and become quite a bureaucracy over time. Implementing new rules and oversight is the usual go-to way of solving problems (as in finance, medicine, and aviation). But established regulation is expensive to manage, very hard to change, and greatly slows down innovation. I am in favor of some regulation, but since we are only at the beginning, maybe it would be more effective not to entangle ourselves in it too quickly?
Could more effective measures be found instead of ever more bureaucracy? For example, could normalizing the act of whistleblowing be an answer? In particular, as a thought experiment, I propose extreme whistleblowing of tiny wrongdoings. If done right, it could reveal existing issues or prevent new shady behavior from slowly emerging in the future.
How could a whistleblowing system work?
It may be a tool or process.
It would be well advertised and frequently used by everyone to report tiny wrongdoings of community members.
Tiny reports would accumulate to create a clearer picture of an individual's consistent behavior.
Reports could be highly simplified to encourage people to use the system. For example, in many cases one could rate an interaction with basic categories and a feeling (rude, intolerant, hateful, risky...).
Reports would not be anonymous in order to be verifiable and accurately countable.
Reports would be accessible only to the most trusted organizations, like CEA, which would also need to become even more trusted. For example, CEA might have to strengthen its data protection considerably, which I would guess is needed anyway (as it is for all organizations).
Individuals should have the right to receive all anonymized data gathered about them, so they have the option of peace of mind.
Reports would have an automatic expiration date (scheduled removal after some years), giving individuals room to change their behavior.
As usual, it would have to be decided what counts as a non-issue, so that the system does not hamper the expression of ideas or create the other side effects I discuss below. A rough, purely illustrative sketch of how such simplified reports could be handled follows this list.
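To make the above a bit more concrete, here is a minimal sketch in Python of what a simplified report record and its retention rules could look like. Everything in it is my own assumption for illustration: the category names, the retention period, and the helper functions are hypothetical, not part of any existing EA process or tool.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum


class Category(Enum):
    # Hypothetical simplified categories; the community would have to agree on these.
    RUDE = "rude"
    INTOLERANT = "intolerant"
    HATEFUL = "hateful"
    RISKY = "risky"


# Assumed retention period before scheduled removal (illustrative only).
RETENTION = timedelta(days=3 * 365)


@dataclass
class Report:
    reporter: str          # reports are not anonymous, so the reporter is recorded
    subject: str           # the community member the report is about
    category: Category     # simplified rating of the interaction
    note: str              # optional short free-text context
    created_at: datetime

    def expired(self, now: datetime) -> bool:
        """A report is scheduled for removal once the retention period has passed."""
        return now - self.created_at > RETENTION


def active_reports(reports: list[Report], now: datetime) -> list[Report]:
    """Keep only reports that have not yet reached their expiration date."""
    return [r for r in reports if not r.expired(now)]


def anonymized_view(reports: list[Report], subject: str, now: datetime) -> list[dict]:
    """What the subject themselves could receive: their own data with reporters removed."""
    return [
        {"category": r.category.value, "created_at": r.created_at.isoformat()}
        for r in active_reports(reports, now)
        if r.subject == subject
    ]
```

The two helper functions mirror two of the points above: scheduled removal after the retention period, and an individual's right to receive the data gathered about them in anonymized form. The genuinely hard parts, of course, are social rather than technical.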
Benefits:
This system would deter people from poor actions. And if someone is unable to abstain from shady behavior, they may feel deterred from being part of the community at all.
People who create issues would be spotted faster, and their work stopped before it causes significant negative impact. Reporting sooner may prevent bigger things from escalating.
If it works, this might prevent the establishment of other, less effective (more resource-intensive) measures.
This would help when examining applications for roles, events, and grants.
Counterarguments:
People's behavior is very complex, so tiny mishaps may not be representative of a person's character. But if evaluation instructions are well known and agreed on by the community, and the process is sensitive to nuance, we could expect higher evaluation quality.
Reporting others is not well accepted in society (and culture varies across countries), so it would be unpleasant to report other people, especially over tiny matters. But if it became common knowledge that this is good for the community, might the culture change?
If tiny wrongdoings are common in the community, then this idea would face a lot of resistance. On the other hand, the more resistance there is, the more such a system might be needed. At the end of the day, the idea is not to punish but to bring issues to light. If issues are known, they can be fixed. Fixing is the end goal.
Tiny wrongdoings are impossible to prevent entirely, and it is sometimes hard to agree on what even counts as one. So the goal is not to pursue tiny things for their own sake, but to gather enough clues to assemble a larger picture, if there is anything larger to be assembled.
EA already has similar processes, but they could be improved as the number of actors in the community grows.
I am unsure whether it would create an environment of more trust (desirable) or of fear (undesirable). Maybe it is a question of how far and how well this would be implemented.
What other reasons are there for this not to work?
For people who enjoy movies, the film “The Whistleblower” (2010) is a fitting example: it depicts very disturbing, massive-scale corruption in the United Nations mission in Bosnia, where almost everybody turns a blind eye, either because exposing it does not fit their own or their organization's interests, or because the corruption has slowly grown to levels that are hard to admit or manage (the film is based on real events).