Separately from the FTX issue, I’d be curious to see you dissect which of Zoe’s ideas you think are worth implementing, which would make things worse, and why.
My takes:
Set up whistleblower protection schemes for members of EA organisations ⇒ seems pretty good if there is a public commitment from an EA funder to something like “if you whistleblow, we’ll cover your salary if you are fired while you search for another job” or something like that
Transparent listing of funding sources on each website of each institution ⇒ Seems good to keep track of who receives money from whom
Detailed and comprehensive conflict of interest reporting in grant giving ⇒ My sense is that this is already handled sensibly enough, though I don’t have great insight into grant-giving institutions
Within the next 5 years, each EA institution should reduce their reliance on EA funding sources by 50% ⇒ this seems bad for incentives and complicated to put into action
Within 5 years: EA funding decisions are made collectively ⇒ seems like it would increase friction and likely decrease the quality of the decisions, though I am willing to be proven wrong
No fireside chats at EAG with leaders. Instead, panels/discussions/double-cruxing of disagreements between widely known and influential EAs and between different orgs, and more space for people who are less known ⇒ Meh, I’m indifferent, since I just don’t consume that kind of content and so don’t know what effects it has, though I am leaning towards it being somewhat good to give voice to others
Increase transparency over
Who gets accepted/rejected to EAG and why ⇒ seems hard to implement, though there could be some model letters or something
leaders/coordination forum ⇒ I don’t sense this forum is anywhere near as important as these recommendations imply
Set up: ‘Online forum of concerns’ ⇒ seems somewhat bad / will lead to overly focusing on things that are not that important, though good to survey people on concerns
I think I am across the board a bit more negative than this, but yeah, this assessment seems approximately correct to me.
On the whistleblower protections: I think real whistleblower protection would be great, but I think setting this up is actually really hard, and it’s very common in the real world that institutions like this end up as traps, net-negative, and get captured by bad actors in ways that strengthen the problems they are trying to fix.
As examples, many university health departments are basically traps where if you go to them, they expel you from the university because you outed yourself as not mentally stable. Many PR departments are traps that will report your complaints to management and identify you as a dissenter. Many regulatory bodies are weapons that bad actors use to build moats around their products (indeed, it looks like crypto regulatory bodies in the U.S. ended up being played by SBF, and were one of the main tools he used against his competitors). Many community dispute committees end up being misled and siding with perpetrators instead of victims (a lesson the rationality community learned from the Brent situation).
I think it’s possible to set up good institutions like this, but rushing towards it is quite dangerous and in-expectation bad, and the details of how you do it really matter (and IMO it’s better to do nothing here than to attempt it without trying exceptionally hard to make it go well).
It seems worth noting that UK employment law has provisions to protect whistleblowers and for this reason (if not others) all UK employers should have whistleblowing policies. I tend to assume that EA orgs based in the UK are compliant with their obligations as employers and therefore do have such policies. Some caution would be needed in setting up additional protections, e.g. since nobody should ever be fired for whistleblowing, why would you have a policy to support people who were?
In practice, I notice two problems. Firstly, management (particularly in small organisations) frequently circumvent policies they experience as bureaucratic restrictions on their ability to manage. Secondly, disgruntled employees seek ways to express what are really personal grievances as blowing the whistle.
Detailed and comprehensive conflict of interest reporting in grant giving ⇒ My sense is that this is already handled sensibly enough [my emphasis], though I don’t have great insight into grant-giving institutions
Not always!