Regardless of whether my proposed interventions work or fail, there would be no evidence for it.
Thanks for sharing these!
I guess this is maybe true for some strict definition of “evidence”, but I would find these suggestions much more helpful if they came with:
- More concreteness. E.g. what things do you think organizations should be transparent about? Is it just that you think grantmakers should publish grant writeups more quickly?
- Actual calculations of trade-offs. E.g. how many additional hours of labor would it take to be transparent in the way that you suggest? What are the actual odds that this transparency results in getting suggestions that improve the organization? Can you make a BOTEC which quantifies the benefits here?
- Specific examples of how these suggestions would have been helpful in the past. E.g. are there historical instances of corruption that your transparency proposal would have caught? How valuable would this have been?
Right now I can’t even tell[1] if I’m one of the people you’re criticizing (maybe my work is as transparent as you want; I don’t know), much less whether I agree with your suggestions.
(Note: it’s obviously way more expensive to do what I suggest than to just briefly list your suggestions. But my guess is that it would be substantially more impactful to go into one of these in detail than to give this current high-level list.)
[1] See also “EA should taboo ‘EA should’”.
Asking individuals to quantify such benefits seems like a de facto way of not actually considering them: individuals very rarely have time to do a thorough job, and any work they publish will inevitably be speculative, and easy enough to criticise on the margins that orgs that don’t want to change their behaviour will be able to find a reason not to.
Since EA orgs’ lack of transparency is a widespread concern among EAs, it seems a reasonable use of resources for orgs that don’t think greater transparency is worth it to produce a one-off (or perhaps once-every-n-years) report explaining why. The community as a whole can then discuss the report. If the sentiment is broadly positive, the org can confidently go on as it is; if there’s a lot of pushback, then (a) the org might choose to listen and change its practices, and (b) if it doesn’t, it will at least be more evident that it has explicitly chosen not to heed the community’s views. I’d hope this would guide orgs towards more caution, and gradually separate the visionaries from the motivated reasoners.
Another option would be a one-time or periodic “EA Governance and Transparency Red Teaming Contest” with volunteer judges not affiliated with the large meta organizations. I don’t think a six-figure prize fund would be necessary; honestly, a major purpose of having a prize fund for this contest would be to credibly signal to would-be writers that the organizations are seriously interested in ideas for improving governance and transparency.
To build on what you said, it’s really hard for people to feel motivated to do even a moderately thorough job on a proposal or cost-effectiveness analysis without a credible signal that the organization(s) in question are sufficiently likely to actually be responsive to it. Right now, it would feel like sending an unsolicited grant proposal to an organization that doesn’t list your cause area as one of its interests and has not historically funded in that area. At least in that example, the author potentially stands to gain from a grant acceptance, while the author of a governance/transparency proposal benefits no more than any other member of the community.
I mean, I don’t even know what the claim is that I’m supposed to produce a report giving my own reasons for. I guess the answer is “nothing.”
(Which obviously is fine! Not all forum posts need to be targeted at getting me to change my behavior; in fact, almost none are. But I thought I might have been in the target audience, hence the comment.)
I think the suggestion is something like the following (I am elaborating a bit). Certain organizations should consider producing a report that explains:
(1) How their organization displays good governance, accountability, and transparency (“GAT”);
(2) Why the organization believes its current level of GAT is sufficient under the circumstances; and possibly
(3) Why the organization believes that future improvements in GAT that might be considered would not be cost-effective / prudent / advisable.
Of course, if the organization thought it should improve its GAT, it could say that instead.
(3) would probably need a crowdsourced list of ideas and a poll on which ones the community was most interested in.
Thanks for your comment!
By transparency, I mean publishing explanations of important decisions to the EA Forum much more regularly and promptly. This is mostly relevant for grantmakers and grantmaking organisations, and isn’t super relevant for your role.
But for example, if you made a big change to the karma system on the EA Forum, I would like you to publish an explanation of your decision for the sake of transparency.
I agree that this would be better, but as you say, it is obviously very time-consuming. I (ironically) don’t really have the capacity to do this soon, but would encourage others to have a go at some BOTECs related to this post.
I’m not aware of any examples of outright corruption in EA.
I think an example of the kind of decision for which reasoning should be published on the EA Forum is when 80,000 Hours starts listing multiple jobs at a new organisation on its job board. Doing this for OpenAI might have led to earlier scrutiny.
Another example might be the Wytham Abbey purchase, but I’m not sure how much time passed between the purchase and the discussion on this forum.
I think a great example of transparency was this post (https://forum.effectivealtruism.org/posts/4JF39v548SETuMewp/?commentId=R2Axqfvbyq89fSRYQ) from the EAG organisers explaining why they’re making a set of changes to EAG, allowing scrutiny from the EA community.
(This meta-analysis (https://journals.sagepub.com/doi/full/10.1177/00208523211033236) suggests that transparency has a small effect on government corruption, but I wouldn’t put too much weight on the results, since the effects seem to be context-specific and I’m not sure how much we can extrapolate from governments to a network of organisations.)
While I don’t think it would be that difficult to write up a BOTEC on the costs side (e.g., here are some ways EA Funds could be more transparent, and I estimate the cost of the package at $50K over five years), quantifying benefits for this kind of thing seems awfully difficult. For instance, I could point to some posts on the forum as evidence that some people are bothered by what they perceive as inadequate transparency, and might reasonably be expected to donate less, not apply, get disillusioned, etc. My sense is that this is true of quite a bit in the meta space, and I’m not sure it’s reasonable to expect transparency spending to be quantified cleanly if similar spending isn’t held to the same standard.
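To make the cost-side BOTEC concrete, here is a minimal sketch in Python. Every number in it (hourly staff cost, writeups per year, hours per writeup) is a placeholder assumption chosen for illustration, not a figure from this thread or from any real organisation; the point is only the shape of the calculation and the break-even framing, which sidesteps the harder problem of estimating benefits directly.

```python
# Illustrative back-of-the-envelope calculation (BOTEC) for the cost side of a
# transparency package. All numbers are placeholder assumptions.

HOURLY_COST = 75          # assumed fully loaded staff cost, USD per hour
WRITEUPS_PER_YEAR = 40    # assumed number of decision writeups published per year
HOURS_PER_WRITEUP = 3     # assumed drafting + review time per writeup
YEARS = 5                 # time horizon for the package

annual_cost = HOURLY_COST * WRITEUPS_PER_YEAR * HOURS_PER_WRITEUP
total_cost = annual_cost * YEARS
print(f"Estimated cost over {YEARS} years: ${total_cost:,.0f}")

# Break-even framing: instead of estimating benefits directly (hard), ask how
# much annual benefit (retained donations, improved decisions, avoided
# scandals) would be needed for the package to pay for itself.
break_even_annual_benefit = annual_cost
print(f"Break-even annual benefit: ${break_even_annual_benefit:,.0f}")
```

Under these made-up inputs the package costs $9,000/year ($45,000 over five years), so it breaks even if it generates at least $9,000/year in benefits; the real work, as noted above, is arguing about the inputs rather than the arithmetic.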