I’m currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.
Ozzie Gooen
I previously gave a fair bit of feedback to this document. I wanted to quickly give my take on a few things.
Overall, I found the analysis interesting and useful. That said, my take is somewhat different from Nuno’s.
On OP:
- Aaron Gertler / OP were given a previous version of this that was less carefully worded. To my surprise, he recommended going forward with publishing it, for the sake of community discourse. I’m really thankful for that.
- This analysis didn’t change my mind much about Open Philanthropy. I thought fairly highly of them before and after, and expect that many others who have been around would think similarly. I think they’re a fair bit away from being an “idealized utilitarian agent” (in part because they explicitly claim not to be), but still much better than most charitable foundations and the like.

On this particular issue:
- My guess is that in the case of criminal justice reform, there were some key facts of the decision-making process that aren’t public and are unlikely to ever be public. It’s very common in large organizations for compromises to be made for various political or social reasons, for example. I’ve previously written a bit about similar things [here](https://twitter.com/ozziegooen/status/1456992079326978052).
- I think Nuno’s quantitative estimates were pretty interesting, but I wouldn’t be too surprised if other smart people would come up with numbers that are fairly different. For those reading this, I’d take the quantitative estimates with a lot of uncertainty.
- My guess is that a “highly intelligent idealized utilitarian agent” probably would have invested a fair bit less in criminal justice reform than OP did, if at all.

On evaluation, more broadly:
- I’ve found OP to be a very intimidating target of critique or evaluation, mainly just because of their position. Many of us are likely to want funding from them in the future (or from people who listen to them), so the risk of getting people at OP upset is very high. From a cost-benefit position, publicly critiquing OP (or other high-status EA organizations) seems pretty risky. This is obviously unfortunate; these groups are often appreciative of feedback, and of course, they are some of the groups it’s most useful to give feedback to. (Sometimes prestigious EAs complain about getting too little feedback; I think this is one reason why.)
- I really would hate for this post to be taken as “ammunition” by people with agendas against OP. I’m fairly paranoid about this. That wasn’t the point of this piece at all. If future evaluations are mainly used as “ammunition” by “groups with grudges”, then that makes it far more hazardous and costly to publish them. If we want lots of great evaluations, we’ll need an environment that doesn’t weaponize them.
- Similarly to the above point, I prefer these sorts of analyses and the resulting discussions to be fairly dispassionate and rational. When dealing with significant charity decisions, I think it’s easy for some people to get emotional (“$200M could have saved X lives!”). But in the scheme of things, there are many decisions like this to make, and large mistakes will definitely be made. Our main goal should be to learn quickly and keep improving our decisions going forward.
- One huge set of missing information is OP’s internal judgements of specific grants. I’m sure they’re very critical now of some groups they’ve previously funded (in all causes, not just criminal justice). However, it would likely be very awkward and unprofessional to actually release this information publicly.
- For many of the reasons mentioned above, I think we can rarely fully trust the public reasons for large actions by large institutions. When a CEO leaves to “spend more time with family”, there’s almost always another good explanation. I think OP is much better than most organizations at being honest, but I’d expect that they still face this issue to an extent. As such, I think we shouldn’t be too surprised when some decisions they make seem strange when evaluating them based on their given public explanations.
Just want to flag that I’m really happy to see this. I think that the funding space could really use more labor/diversity now.
Some quick/obvious thoughts:
- Website is pretty great, nice work there. I’m jealous of the speed/performance, kudos.
- I imagine some of this information should eventually be private to donors. Like, the medical expenses one.
- I’d want to eventually see Slack/Discord channels for each regrantor and their donors, or some similar setup. I think that communication between some regrantors and their donors could be really good.
- I imagine some regranters would eventually work in teams. From being both on LTFF and seeing the FTX regrantor program, I did kind of like the LTFF policy of vote averaging. Personally, I think I do grantmaking best when working on a team. I think that the “regrantor” could be a “team leader”, in the sense that they could oversee people under them.
- As money amounts increase, I’d like to see regrantors getting paid. It’s tough work. I think we could really use more part-time / full-time work here.
- I think if I were in charge of something like this, I’d have a back-office of coordinated investigations for everyone. Like, one full-time person who just gathers information about teams/people, and relays that to regrantors.
- As I wrote about here, I’m generally a lot more enthusiastic about supporting sizeable organizations than tiny ones. I’d hope that this could be a good avenue for funding projects within sizeable organizations.
- I want to see more attention on reforming/improving the core aspects/community/bureaucracy of EA. These grantmakers seem very AI safety focused.
- Ideally it could be possible to have ratings/reviews of what the regrantors are like to work with. Some grantmakers can be far more successful than others at delivering value to grantees without being a pain to work with.
- I probably said this before, but I’m not very excited by Impact Certificates. More “traditional” grantmaking seems much better.
- One obvious failure mode is that regrantors might not actually spend much of their money. It might be difficult to get good groups to apply. This is not easy work.
Good luck!
“which makes me think that it’s likely that Leverage at least for a while had a whole lot of really racist employees.”
“Leverage” seems to have employed at least 60 people at some time or another, in different capacities. I’ve known several (I’ve met maybe around 15 or so), and the ones I’ve interacted with often seemed like pretty typical EAs/rationalists. I got the sense that there may have been a few people there interested in the neoreactionary movement, but also got the impression that the majority really weren’t.
I just want to flag that I really wouldn’t want EAs generally to think that “people who worked at Leverage are pretty likely to be racist,” because this seems quite untrue and quite damaging. I don’t have much information about the complex situation that Leverage represents, but I do think that the sum of the people ever employed by them still holds a lot of potential. I’d really not want them to become, or feel, isolated from the rest of the community.
Portions of this reform package sound to my ears like the dismantling of EA and its replacement with a new movement, Democratic Altruism (“DA”)
I like the choice to distill this into a specific cluster.
I think this full post definitely portrays a very different vision of EA than the one we have, and than what I think many current EAs want. It seems like some particular cluster of this community might be in that camp, in favor of this vision.
If that were the case, I would also be interested in seeing this experimented with by some cluster. Maybe even make a distinct tag, “Democratic Altruism,” to help organize conversation on it. People in this camp might be most encouraged to directly try some of these proposals themselves.
I imagine there would be a lot of work to really put forward a strong idea of what a larger “Democratic Altruism” would look like, and also, there would be a lengthy debate on its strengths and weaknesses.
Right now I feel like I keep seeing similar ideas here being argued again and again, without much organization.

(That said, I imagine any name should come from the group advocating this vision.)
I thank you for apologizing publicly and loudly. I imagine that you must be in a really tough spot right now.
I think I feel a bit conflicted on the way you presented this.
I treat our trust in FTX, and our dealings with its leadership, as bureaucratic failures. Whatever measures we had in place to deal with risks like this weren’t enough.
This specific post reads a bit to me like it’s saying, “We have some blog posts showing that we said these behaviors are bad, and therefore you can trust both that we follow these things and that we encourage others to, even privately.” I’d personally prefer it if, in the future, you didn’t focus on the blog posts and quotes. I think they act as only very weak evidence, and your use of them makes it feel a bit otherwise.
Almost every company has lots of public documents outlining their commitments to moral virtues.
I feel pretty confident that you were ignorant of the fraud. But I would like more clarity about what sorts of concrete measures were in place to prevent situations like this, and what measures might change in the future to help make sure it doesn’t happen again.
There might also be many other concrete things that could be done to show your (and other senior people’s) care about these values.
Again, I appreciate the words, but if there’s one thing the recent scandal taught us, it’s that it’s hard to take much from words. I don’t blame you here, but I would like us to have a culture where EAs can focus on evidence of credibility that’s much higher-signal than a list of previous altruistic writings.
All that said, I imagine that more rigorous evidence here will take more time.