I’m currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.
Ozzie Gooen
I previously gave a fair bit of feedback on this document. I wanted to quickly give my take on a few things.
Overall, I found the analysis interesting and useful. However, I have a somewhat different take than Nuno does.
On OP:
- Aaron Gertler / OP were given a previous version of this that was less carefully worded. To my surprise, he recommended going forward with publishing it, for the sake of community discourse. I’m really thankful for that.
- This analysis didn’t get me to change my mind much about Open Philanthropy. I thought fairly highly of them before and after, and expect that many others who have been around would think similarly. I think they’re a fair bit away from being an “idealized utilitarian agent” (in part because they explicitly claim not to be), but still much better than most charitable foundations and the like.
On this particular issue:
- My guess is that in the case of criminal justice reform, there were some key facts of the decision-making process that aren’t public and are unlikely to ever be public. It’s very common in large organizations for compromises to be made for various political or social reasons, for example. I’ve previously written a bit about similar things [here](https://twitter.com/ozziegooen/status/1456992079326978052).
- I think Nuno’s quantitative estimates were pretty interesting, but I wouldn’t be too surprised if other smart people would come up with numbers that are fairly different. For those reading this, I’d take the quantitative estimates with a lot of uncertainty.
- My guess is that a “highly intelligent idealized utilitarian agent” probably would have invested a fair bit less in criminal justice reform than OP did, if at all.
On evaluation, more broadly:
- I’ve found OP to be a very intimidating target of critique or evaluation, mainly because of their position. Many of us are likely to want funding from them in the future (or from people who listen to them), so the perceived risk of upsetting people at OP is very high. From a cost-benefit standpoint, publicly critiquing OP (or other high-status EA organizations) seems pretty risky. This is obviously unfortunate; these groups are often appreciative of feedback, and of course, they are some of the most useful groups to give feedback to. (Sometimes prestigious EAs complain about getting too little feedback; I think this is one reason why.)
- I really would hate for this post to be taken as “ammunition” by people with agendas against OP. I’m fairly paranoid about this. That wasn’t the point of this piece at all. If future evaluations are mainly used as “ammunition” by “groups with grudges”, then that makes it far more hazardous and costly to publish them. If we want lots of great evaluations, we’ll need an environment that doesn’t weaponize them.
- Similarly to the above point, I prefer these sorts of analyses and the resulting discussions to be fairly dispassionate and rational. When dealing with significant charity decisions, I think it’s easy for some people to get emotional (“$200M could have saved X lives!”). But in the scheme of things, there are many decisions like this to make, and there will definitely be large mistakes made. Our main goals should be to learn quickly and continue to improve our decisions going forward.
- One huge set of missing information is OP’s internal judgements of specific grants. I’m sure they’re very critical now of some groups they’ve previously funded (in all causes, not just criminal justice). However, it would likely be very awkward and unprofessional to actually release this information publicly.
- For many of the reasons mentioned above, I think we can rarely fully trust the public reasons for large actions by large institutions. When a CEO leaves to “spend more time with family”, there’s almost always another good explanation. I think OP is much better than most organizations at being honest, but I’d expect that they still face this issue to an extent. As such, I think we shouldn’t be too surprised when some decisions they make seem strange when evaluating them based on their given public explanations.
Just want to flag that I’m really happy to see this. I think that the funding space could really use more labor/diversity now.
Some quick/obvious thoughts:
- Website is pretty great, nice work there. I’m jealous of the speed/performance, kudos.
- I imagine some of this information should eventually be private to donors. Like, the medical expenses one.
- I’d want to eventually see Slack/Discord channels for each regrantor and their donors, or some similar setup. I think that communication between some regrantors and their donors could be really good.
- I imagine some regrantors would eventually work in teams. From both being on the LTFF and seeing the FTX regrantor program, I kind of liked the LTFF policy of vote averaging. Personally, I think I do grantmaking best when working on a team. I think that the “regrantor” could be a “team leader”, in the sense that they could oversee people under them.
- As money amounts increase, I’d like to see regrantors getting paid. It’s tough work. I think we could really use more part-time/full-time work here.
- If I were in charge of something like this, I’d have a back office of coordinated investigations for everyone. Like, one full-time person who just gathers information about teams/people and relays it to regrantors.
- As I wrote about here, I’m generally a lot more enthusiastic about supporting sizeable organizations than tiny ones. I’d hope this could be a good way to fund projects within sizeable organizations.
- I want to see more attention on reforming/improving the core aspects/community/bureaucracy of EA. These grantmakers seem very AI safety focused.
- Ideally there could be ratings/reviews of what the regrantors are like to work with. Some grantmakers can be far more successful than others at delivering value to grantees and not being a pain to work with.
- I probably said this before, but I’m not very excited by Impact Certificates. More “traditional” grantmaking seems much better.
- One obvious failure mode is that regrantors might not actually spend much of their money. It might be difficult to get good groups to apply. This is not easy work.
Good luck!
“which makes me think that it’s likely that Leverage at least for a while had a whole lot of really racist employees.”
“Leverage” seems to have employed at least 60 people at some time or another in different capacities. I’ve known several (maybe met around 15 or so), and the ones I’ve interacted with often seemed like pretty typical EAs/rationalists. I got the sense that there may have been a few people there interested in the neoreactionary movement, but also got the impression that the majority really weren’t.
I just want to flag that I really wouldn’t want EAs generally to think that “people who worked at Leverage are pretty likely to be racist,” because this seems quite untrue and quite damaging. I don’t have much information about the complex situation that was Leverage, but I do think that the sum of the people ever employed by them still holds a lot of potential. I’d really not want them to get or feel isolated from the rest of the community.
Portions of this reform package sound to my ears like the dismantling of EA and its replacement with a new movement, Democratic Altruism (“DA”)
I like the choice to distill this into a specific cluster.
I think this full post definitely portrays a very different vision of EA than what we have, and than what I think many current EAs want. It seems like some particular cluster of this community might be in one camp, in favor of this vision.
If that were the case, I would also be interested in this being experimented with by some cluster. Maybe even make a distinct tag, “Democratic Altruism”, to help organize conversation on it. People in this camp might be most encouraged to directly try some of these proposals themselves.
I imagine there would be a lot of work to really put forward a strong idea of what a larger “Democratic Altruism” would look like, and also, there would be a lengthy debate on its strengths and weaknesses.
Right now I feel like I keep on seeing similar ideas here being argued again and again, without much organization.
(That said, I imagine any name should come from the group advocating this vision.)
I don’t really use the word myself (at least, I don’t remember using it), but I sometimes do say things like “intense utilitarian” or “intense worker.”
I’d vote against “Drank the Kool-Aid EAs.” It’s a super dark metaphor: an altruistic sect that turned into a cult and committed mass suicide. I get that it’s a joke, but it feels like a bit much to me.
https://en.wikipedia.org/wiki/Drinking_the_Kool-Aid
The phrase originates from events in Jonestown, Guyana, on November 18, 1978, in which over 900 members of the Peoples Temple movement died. The movement’s leader, Jim Jones, called a mass meeting at the Jonestown pavilion after the murder of U.S. Congressman Leo Ryan and others in nearby Port Kaituma. Jones proposed “revolutionary suicide” by way of ingesting a powdered drink mix lethally laced with cyanide and other drugs which had been prepared by his aides.
I might be able to provide a bit of context:
I think the devil is really in the details here. I think there are some reasonable versions of this.
The big question is why and how you’re criticizing people, and what that reveals about your beliefs (and what those beliefs are).
As an extreme example, imagine if a trusted researcher came out publicly, saying,
“EA is a danger to humanity because it’s stopping us from getting to AGI very quickly, and we need to raise as much public pressure against EA as possible, as quickly as possible. We need to shut EA down.”
If I were a funder, and I were funding researchers, I’d be hesitant to fund researchers who both believed that and were taking intense action accordingly. Like, they might be directly fighting against my interests.
It’s possible to use criticism to improve a field or try to destroy it.
I’m a big fan of positive criticism, but I think some kinds of criticism can be destructive (see a lot of politics, for example).
I know less about this particular circumstance; I’m just pointing out how the other side might see it.
Thanks for the post! Much of it resonated with me.
A few quick thoughts:
1. I could see some reads of this being something like, “EA researchers are doing a bad job and should feel bad.” I wouldn’t agree with this (mainly the latter bit) and assume the author wouldn’t either. Lots of EAs I know seem to be doing about the best they know how, and have a lot of challenges they are working to overcome.
2. I’ve had some similar frustrations over the last few years. I think that there is a fair bit of obvious cause prioritization research to be done that’s getting relatively little attention. I’m not as confident as you seem to be about this, but agree it seems to be an issue.
3. I would categorize many of the issues as systemic across different sectors. I think significant effort in these areas would require bold efforts with significant human and financial capital, and such combinations are rare. Right now the funding situation is still quite messy for ventures outside the core OpenPhil cause areas.
I could see an academic initiative taking some of them on, but that would be a significant undertaking from at least one senior academic who may have to take a major risk to do so. Right now we have a few senior academics who led/created the existing main academic/EA clusters, and these projects were very tied to the circumstances of the senior people.
If you want a job in academia, it’s risky to do things outside the common tracks, and if you want one outside of academia, it’s often riskier. One in-between option is making new small nonprofits. However, this is also a significant undertaking. The funding situation for small ongoing efforts is currently quite messy; these are often too small for OpenPhil but too big for EA Funds.
4. One reason why funding is messy is that it’s thought that groups doing a bad job on these topics could be net negative. Thus, few people are trusted to lead important research in new areas that are core to EA. This could probably be improved with significantly more vetting, but that takes a lot of time. Now that I think about it, OpenPhil has very intensive vetting for their hires, and these are just hires; after they are hired, they get managers and can be closely worked with. If a funder funds a totally new research initiative, they will have far less control over (or understanding of) it than organizations have over their employees. Right now we don’t have organizations around that can fund small initiatives with anything near hiring-level vetting; perhaps we should, though.
5. We only have so many strong EA researchers, and fewer people capable of leading teams and obtaining funding. Right now a whole lot of great ones are focused on AI (which often requires many years of grad school or training) and animals. My impression is that on the margin, moving some people from these fields to other fields (cause prioritization or experimental new things) could be good, though it would be a big change for several individuals.
6. It seems really difficult to convince committed researchers to change fields. They often have taken years to develop expertise, connections, and citations, so changing that completely is very costly. An alternative is to focus on young, new people, but those people take a while to mature as researchers.
In EA we just don’t have many “great generic researchers” who we can reassign from one topic to something very different on short notice. More of this seems great to me, but it’s tricky to set up and attract talent for.
7. I think it’s possible that older/experienced researchers don’t want to change careers, and new ones aren’t trusted with funding. Looking back, I’m quite happy that Elie and Holden started GiveWell without feeling like they needed to work in an existing org for 4 years first. I’m not sure what to do here, but would like to see more bets on smart young people.
8. I think there are several interesting “gaps” in EA and am sure that most others would agree. Solving them is quite challenging; it could require a mix of coordination, effort, networking, and thinking. I’d love to see some senior people try to do work like this full-time. In general, I’d love to see more “EA researcher/funding coordination”; that seems like the root of a lot of our problems.
9. I think Rethink Priorities has a pretty great model and could be well suited to these kinds of problems. My impression is that funding has been a bottleneck for them. I think Peter may respond to this, so he can speak to it directly. If there are funders out there who are excited to fund any of the kinds of work described in this article, I’d suggest reaching out to Rethink Priorities and seeing if they could facilitate that. They would be my best bet for that kind of arrangement at the moment.
10. Personally, I think forecasting/tooling efforts could help out cause prioritization work quite a bit (this is what I’m working on), but they will take some time, and obviously aren’t direct work on the issue.
Personal reflections on self-worth and EA
My sense of self-worth often comes from guessing what people I respect think of me and my work.
In EA… this is precarious. The most obvious people to listen to are the senior/powerful EAs.
In my experience, many senior/powerful EAs I know:
1. Are very focused on specific domains.
2. Are extremely busy.
3. Have substantial privileges (exceptionally intelligent, stable health, esteemed education, affluent/intellectual backgrounds).
4. Display limited social empathy (the ability to read and respond to the emotions of others).
5. Sometimes might actively try not to sympathize/empathize with many people, because they are judging them for grants and don’t want to be biased. (I suspect this is the case for grantmakers.)
6. Are not that interested in acting as a coach/mentor/evaluator to people outside their key areas/organizations.
7. Don’t intend or want others to care too much about what they think outside of cause-specific promotion and a few pet ideas they want to advance.
A parallel can be drawn with the world of sports. Top athletes can make poor coaches. Their innate talent and advantages often leave them detached from the experiences of others. I’m reminded of David Foster Wallace’s “How Tracy Austin Broke My Heart.”
If you’re a tennis player, tying your self-worth to what Roger Federer thinks of you is not wise. Top athletes are often egotistical, narrow-minded, and indifferent to others. This sort of makes sense by design: to become a top athlete, you often have to obsess over your own abilities to an unnatural extent for a very long period.
Good managers are sometimes said to be better as coaches than they are as direct contributors. In EA, I think those in charge seem more like “top individual contributors and researchers” than “top managers.” Many actively dislike management or claim that they’re not doing management. (I believe funders typically don’t see their work as “management”, which might be very reasonable.)
But that said, even a good class of managers wouldn’t fully solve the self-worth issue. Tying your self-worth too much to your boss can be dangerous—your boss already has much power and control over you, so adding your self-worth to the mix seems extra precarious.
I think if I were to ask any senior EA I know, “Should I tie my self-worth to your opinion of me?” they would say something like, “Are you insane? I barely know you or your work. I can’t at all afford the time to evaluate your life and work enough to form an opinion that I’d suggest you take really seriously.”
They have enough problems—they don’t want to additionally worry about others trying to use them as judges of personal value.
But this raises the question: who, if anyone, should I trust to inform my self-worth?
Navigating intellectual and rationalist literature, I’ve grown skeptical of many other potential evaluators. Self-judgment carries inherent bias and is easy to Goodhart. Many “personal coaches” and even “executive coaches” raise my epistemic alarm bells. Friends, family, and people who are “more junior” come with different substantial biases.
Some favored options are “friends of a similar professional class who could provide long-standing perspective” and “professional coaches/therapists/advisors.”
I’m not satisfied with any obvious options here. I think my next obvious move forward is to acknowledge that my current situation seems subpar and continue reflecting on this topic. I’ve dug into the literature a bit but haven’t yet found answers I find compelling.
This is an interesting idea, thanks for raising it!
I think, intuitively, it worries me. As someone involved in hiring in these sorts of areas, I’m fairly nervous about the liabilities that come with hiring (legal ones, and just upsetting people), and this seems like it could increase them.
I’m imagining:
- There’s a person who thinks they’re great, but the hiring manager really doesn’t see it. They get rejected.
- They decide to work on it anyway, saying they’ll get the money later.
- They continue to email the org about their recent results, hoping to get feedback, somewhat like an employee would.
- 6-20 months later, they have some work, and are sure that it deserves funding.
- The work isn’t that great, and the prize is denied.
- They get really upset that their work has been denied.
This system could create “pseudo-employees” who are trying to act as employees, but aren’t really employees. This just seems pretty messy.
In addition, funding seems tricky. Like, a lot of research nonprofits don’t have that much extra funding allocated in their budgets for this. I imagine it would have to be coordinated with funders, on-demand. (“Hey, funder X… person Y, who we rejected, just did good work, and now we need $160k to fund them. Can you donate that money to us, so we can retrospectively pay them?”)
I could also see the tax/legal implications as messy, though that could be resolved with time.
Generally, if someone seems pretty strong and capable of doing independent work, I suggest they apply to the LTFF, and say that I could help discuss their application. The LTFF funds a lot of people at this point. Small funders like the LTFF seem like great escape hatches for these situations. So this technique would really make sense, I assume, if both the LTFF rejects them, and I’m pretty confident they have a solid chance of doing good research. This is pretty unusual.
It’s quite possible the benefits outweigh these negatives. I’m not sure; I just wanted to share my quick feelings on this.
I enjoyed this, thanks!
Brief thoughts:
1. Very thankful for the teams that run these! I got a lot of value from them.
2. Obvious comment, but I’d be interested in more EAG Virtual conferences. It’s possible they don’t seem as cool, but maybe that’s partially fixable. I’d expect this to cut down on much of the expense. I liked the virtual EAG I attended during the pandemic.
3. It seems healthy to me to raise prices over time, maybe up to full cost or even above it (a small profit margin)? I think EA would be better if people paid more for the services they used.
4. If one were to estimate the value of EAG in terms of something like “quality-adjusted person-interaction-time”, I would expect that more small events could be cost-effective. (See the toy sketch after this list.)
5. I’d feel good about experimentation. Even take a year or two off from EAGs and try out very different kinds of events. We’re in this for the long term; I think more exploration could make sense.
6. If OP is paying for much of it, I’d really like them to state their logic model for where they think the value comes from. I feel nervous being subsidized to do something when it’s not very clear to me exactly what that reasoning is.
7. On that note, I’d of course be interested to better understand the model of where CEA is thinking the value comes from. I have multiple hypotheses here.
8. I’ve noticed that at some of the EAGs I attended, the venues would kick us out pretty early, which seems to have created some lost value.
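To make point 4 slightly more concrete, here’s a minimal toy sketch of how one might compare events on “quality-adjusted person-interaction-time” per dollar. Everything here is hypothetical: the helper function and all the numbers are made up purely for illustration, not estimates of actual EAG costs or value.

```python
# Toy sketch: comparing events on "quality-adjusted person-interaction-time"
# per dollar. All numbers are invented for illustration only.

def qapt_per_dollar(attendees: int, interaction_hours: float,
                    avg_quality: float, total_cost: float) -> float:
    """Quality-adjusted person-interaction-hours produced per dollar spent."""
    qapt = attendees * interaction_hours * avg_quality
    return qapt / total_cost

# Hypothetical large conference vs. hypothetical small retreat.
large = qapt_per_dollar(attendees=1500, interaction_hours=10,
                        avg_quality=0.6, total_cost=2_500_000)
small = qapt_per_dollar(attendees=50, interaction_hours=12,
                        avg_quality=0.9, total_cost=40_000)

print(f"Large event: {large:.4f} QAPT-hours per dollar")  # ~0.0036
print(f"Small event: {small:.4f} QAPT-hours per dollar")  # ~0.0135
```

Under these made-up numbers the small event comes out ahead; the point is just that a simple metric like this would make such comparisons easy to run and debate.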
Hi Max,
Thanks for clarifying your reasoning here.
Again, if you think CEA shouldn’t expand, my guess is that it shouldn’t.
I respect your opinion a lot here and am really thankful for your work.
I think this is a messy issue. I tried clarifying my thoughts for a few hours. I imagine what’s really necessary is broader discussion and research into expectations and models of the expansion of EA work, but of course that’s a lot of work. Note that I’m not particularly concerned with CEA becoming big; I’m more concerned with us aiming for some organizations to be fairly large.
Feel free to ignore this or just not respond. I hope it might provide information on a perspective, but I’m not looking for any response or to cause controversy.
What is organization centrality?
This is a complex topic, in part because the concept of “organizations” is a slippery one. I imagine what really matters is something like “coordination ability”, which typically requires some kind of centralization of power. My impression is that there’s a lot of overlap in donors and advisors around the groups you mention. If a few people call all the top-level shots (like funding decisions), then “one big organization” isn’t that different from a bunch of small ones. I appreciate the point about operations sharing; I’m sure there are some organizations that have had subprojects that shared fewer resources than what you described. It’s possible to be very decentralized within an organization (think of a research lab with distinct product owners) and to be very centralized within a collection of organizations.
Ideally, I’d imagine that the choice of coordination centralization would be quite separate from that of the formal nonprofit structure. You’re already sharing operations in an unconventional way. I could imagine cases where it could make sense to have many nonprofits under a single ownership (even if this ownership is not legally binding), perhaps to help with targeted fundraising or to spread out legal liability. Many people and companies own several sub-LLCs and similar; I could see this being the main case.
“We will continue to do some of the work we currently do to help to coordinate different parts of the community—for instance the EA Coordination Forum (formerly Leaders Forum), and a lot of the work that our community health team do. The community health team and funders (e.g. EA Funds) also do work to try to minimize risks and ensure that high-quality projects are the ones that get the resources they need to expand.”
-> If CEA is vetting which projects get made and expanded, and hosts community health and other resources, then it’s not *that* much different from formally bringing these projects under its wing. I imagine that finding a structure where CEA continues to offer organizational and coordination services, as the base of organizations grows, will be pretty tricky.
Again, what I would like to see is lots of “coordination ability”, and I expect that this could go further with centralized power that has the capacity to act. (I could imagine funders who technically have authority, but don’t have the time to do much that’s useful with it.) It’s possible that if CEA (or another group) is able to be a dominant decision maker, and perhaps grow that influence over time, then that would represent centralized control of power.
What can we learn from the past?
I’ve heard of the histories of CEA and 80,000 Hours being used in this way before. I agree with much of what you said here, but am unsure about the interpretations. What’s described is a very small sample size and we could learn different kinds of lessons from them.
Most of the non-EA organizations that I could point to that have important influence in my life are much bigger than 20 people. I’m very happy Apple, Google, The Bill & Melinda Gates Foundation, OpenAI, Deepmind, The Electronic Frontier Foundation, Universities, The Good Food Institute, and similar, exist.
It’s definitely possible to have too many goals, but that’s relative to size and existing ability. It wouldn’t have made sense for Apple to start out making watches and speakers, but it got there eventually, and is now doing a pretty good job at it (in my opinion). So I agree that CEA seems to have over-applied itself, but don’t think that means it shouldn’t be aiming to grow later on.
Many companies have had periods where they’ve diversified too quickly and suffered. Apple, famously, before Jobs came back, Amazon apparently had a period post-dot-com bubble, arguably Google with Google X, the list goes on and on. But I’m happy these companies eventually fixed their mistakes and continued to expand.
“Many Small EA Orgs”
“I hope for a world where there are lots of organizations doing similar things in different spaces… I think we’re better off focusing on a few goals and letting others pick up other areas….”
I like the idea of having lots of organizations, but I also like the idea of having at least some really big organizations. The Good Food Institute was created just a few years ago, now seems to have a huge team, and seems to correspondingly be taking on big projects.
I’m happy that we have few groups that coordinate political campaigns. Those seem pretty messy. True, the DNC in the US might have serious problems, but I think the answer would be a separate large group, not hundreds of tiny ones.
I’m also positive about 80,000 Hours, but I feel like we should be hoping for at least some organizations (like The Good Food Institute) to have much better outcomes. 80,000 Hours took quite some time to get to where it is today (I think it started around 2012?), and is still rather small in the scheme of things. They have around 14 full-time employees; they seem quite productive, but not 2-5 orders of magnitude more than other organizations. GiveWell seems much more successful; not only did they also grow a lot, but they convinced a billionaire couple to help them spin off a separate entity which is now hugely important.
The costs of organizational growth vs. new organizations
Trust of key figures
It seems much more challenging to me to find people I would trust as nonprofit founders than people I would trust as nonprofit product managers. Currently we have limited availability of senior EA leaders, so it seems particularly important to select people in positions of power who already understand what these leaders consider to be valuable and dangerous. If a big problem happens, it seems much easier to remove a PM than a nonprofit Executive Director or similar.
Ease
Founding requires a lot of challenging tasks like hiring, operations, and fundraising, which many people aren’t well suited to. I’m founding a nonprofit now, and have had to learn how to set up a nonprofit and maintain it, which has been a major distraction. I’d be happier at this stage making a department inside a group that would do those things for me, even if I had to pay a fee. It seems great that CEA did operations for a few other groups, but my impression is that you’re not intending to do that for many of the new groups you are referring to.
One related issue is that it can be quite hard for small organizations to get talent. Typically they have poor brands and tiny reputations. In situations where these organizations are actually strong (which should be many), having them be part of the bigger organization in brand alone seems like a pretty clear win. On the flip side, if some projects will be controversial or done poorly, it can be useful to ensure they are not part of a bigger organization (so they don’t bring it down).
Failure tolerance
Not having a “single point of failure” sounds nice in theory, but it seems to me that the funders are the main thing that matters, and they are fairly coordinated (and should be). If they go bad, then no amount of reorganization will help us much. If they’re able to do a decent job, then they should help select leadership of big organizations that could do a good job, and/or help spin off decent subgroups in the case of emergencies.
I think generally effort going into “making sure things go well” is better than effort going into “making sure that disasters won’t be too terrible”, and that’s better achieved by focusing on sizable organizations.
Tolerance of smaller failures could also be worse with a distributed system; I expect it to be much easier to fire or replace a PM than to kick out a founder or move them around.
Expectations of growth
One question might be how ambitious we are regarding the growth of meta and longtermist efforts. I could imagine a world where we’re 100x the size, 20 years from now, with a few very large organizations, but it’s hard to imagine how many people we could manage with tiny organizations.
TLDR
My read of your posts is that you are currently aiming for / expecting a future of EA meta where there are a bunch of very small (<20 person) organizations. This seems quite unusual compared to other similar movements I’m aware of. Very unusual actions often require much stronger cases than usual ones, and I don’t yet see one here. The benefits of having at least a few very powerful meta organizations seem greater than the costs.
I’m thankful for whatever work you decide to pursue, and more than encourage trying things out, like fostering many small groups. I mainly wouldn’t want us to over-commit to any strategy like that, though, and I would also like to encourage some more reconsideration, especially as new evidence emerges.
I honestly think this was one of the more obvious ones on the list. $39k for one full year of work is a bit of a steal, especially for someone who already has the mathematical background, video production skills, and audience. I imagine if CEA were to try to recreate that, it would have a pretty hard time; plus, the recruitment would be quite a challenge.
Thanks for the post, I found that interesting!
Sorry you feel like you made mistakes here. We all make mistakes; I make them constantly.
I look forward to your future posts.
[Edit: I have a reply to this in the comments]
I think it’s nice, but I also think we should be raising the bar of the evidence we need to trust people.
SBF and the inner FTX crew seemed very EA. SBF had a utilitarian blog[1] that I thought was pretty good (for the time, it was ~2014).
He repeatedly spoke about how important it was for crypto exchanges to not do illegal activity. He even actively worked to regulate the industry.
I’d bet that SBF spent a lot more effort speaking and advocating about the importance of trustworthiness in crypto than perhaps any of us have on the importance of trust and ordinary good moral principles.
Sam literally argued for trust and accountability before Congress.
From what I understand, he was the poster boy for what trustworthy crypto looks like.
At the very least, we could really use measures that would have caught an SBF-lite.
> EA posts are very unlike company virtue statements.
Sure, but SBF definitely got through. I’m sure any of his co-conspirators also would have. EA-adjacent people can clearly fool EAs using these sorts of methods.
(I considered raising this issue more in the first post, but am happy to add it now that there’s push-back.)

[1] I can’t find the blog now, and wouldn’t be surprised if it were no longer online. It’s possible I’m misremembering. I remember the blog having ~6 posts or so, from around 2014. If anyone else has a link, it seems valuable to share it.
I think I agree with like 80% of this. But I think it should be flagged more that when many people try “engaging writing”, they do end up with stuff that’s really bad.
For example, the Copyblogger website seems to encourage classic clickbait headlines, like:
“Here’s why Netflix streaming quality has nosedived over the past few months”
“12 Of The Most Stunning Asian Landscapes. The Last One Blew Me Away.”
I don’t want to see stuff like that on the EA Forum.
Similarly, I found the title of this post hyperbolic (you also call attention to this, but several paragraphs in). I don’t want to encourage many more people to make titles like that. (Though I would encourage images, elegance, plain language, jokes, and so on).
So I think EA writers can definitely improve at being engaging, but we should make sure to steer clear of alarmist journalistic techniques.
Thanks for this, I feel like I’ve seen this too.
I’m 30 now, and I feel like several of my altruistic-minded friends in my age group in big companies are reluctant to work in nonprofits for stated reasons that feel off to me.
My impression is that the EA space is quite small now, but has the potential to get quite a bit bigger later on. People who are particularly promising and humble enough to work in such a setting (this is a big restriction) sometimes rise up quickly.
I think a lot of people look at initial EA positions and see them as pretty low status compared to industry jobs. I have a few responses here:
1) They can be great starting positions for people who want to do ambitious EA work. It’s really hard to deeply understand how EA organizations work without working in one, even in (many, but not all) junior positions.
2) One incredibly valuable attribute of many effective people is a willingness to “do whatever it takes” (not in the sense of crossing ethical or legal lines). This sometimes means actual sacrifice; it sometimes means working positions that would broadly be considered low status. Honestly, I regard this attribute as equally important to many aspects of skill and intelligence. Some respected managers and executives are known for cleaning the floors or providing personal help to employees or colleagues, often because those things were highly effective at that moment, even if they might be low status. (Much of setting up or managing an organization is often highly glorified grunt work.)
Personally, I try to give extra appreciation to people in normally low-status positions; I think they are very commonly overlooked.
---
Separately, I’m really not sure how much to trust the reasons people give for their decisions. I’m sure many people who use the “overqualified” argument would be happy to be setting up early infrastructure with very few users for an Elon Musk venture, or building internal tooling for few users at many well run, high paying, and prestigious companies.
I feel like this is a cheap shot, and don’t like seeing it on the top of this discussion.
I think it can be easy to belittle the accomplishments of basically any org. Most startups seem very unimpressive when they’re small.
A very quick review would show other initiatives they’ve worked on. Just go to their tag, for instance:
https://forum.effectivealtruism.org/topics/nonlinear-fund
(All this isn’t to say where I side on the broader discussion. I think the focus now should be on figuring out the key issues here, and I don’t think comments like this help with that. I’m fine with comments like this in smaller channels or with way fewer upvotes, but feel very awkward seeing this on top.)
I also want to flag that I believe Pablo explicitly agreed to have this review made public. I think it would have been very easy for this to be kept private, and I think the value is much greater of it being public, so I’m very thankful he did this.
As someone who’s spent a fair amount of time with the SV startup scene (have cofounded multiple companies) and the EA scene, I’d flag that the cultures of at least these two are quite different and often difficult to bridge.
Most of the large EA-style projects I’d be excited about are ones that would require a fair amount of buy-in and trust from the senior EA community. For example, if you’re making a new org to investigate AGI safety or bio safety, or to expand EA, senior EAs would care a lot about the leadership having really strong epistemics and understanding of existing EA thinking on the topic.
One problem is that entrepreneurship culture can present a few challenges:
1) There’s often a lot of overconfidence and weird epistemics
2) Often there’s not much spare time to learn about EA concepts
3) Leaders often seem to grow egos
The key thing, to me, seems to be some combination of humility and willingness to begin at the bottom for a while. I think that becoming well-versed enough in EA/longtermism to found something important can often require beginning in a low-level research role or similar.
One strategy some people give is something like, “I don’t care about buy-in from the EA community, I could start something myself quickly, and raise a lot of other money”. In sensitive areas, this can get downright scary, in my opinion.
Of my current successful entrepreneur friends, I can’t see many of them going the “go low-status for a few years” route, but I could see some. Most people I know don’t seem to want to go down a few status and confidence levels for a while.
There are definitely some prominent examples in EA of people who have done similar things (I’d flag Ben West, who seems to have pulled off a successful transition, and is discussed in these comments), but there aren’t all that many.
The FHI RSP program was a nice introductory program, but was definitely made more for researchers than entrepreneurs. I could imagine us having similar transitional programs for entrepreneur types in the future. There are probably some ways more programs and work in this area could make things easier; for instance, they could seem really prestigious (flashy branding), in part to make it more palatable for people taking status decreases for a while.
If there are successful entrepreneurs out there reading this who are interested in chatting, I’d of course be happy to (just message me), though I’m sure 80k and other groups would be interested as well.
(Note: I think Charity Entrepreneurship gets around this a bit by, first, focusing on younger people with the potential to be entrepreneurs, rather than people who are already very successful, and second, focusing on particular interventions that can be done more independently.)
I thank you for apologizing publicly and loudly. I imagine that you must be in a really tough spot right now.
I think I feel a bit conflicted on the way you presented this.
I treat our trust in FTX and our dealings with them as bureaucratic failures. Whatever measures we had in place to deal with risks like this weren’t enough.
This specific post reads a bit to me like it’s saying, “We have some blog posts showing that we said these behaviors are bad, and therefore you could trust both that we follow these norms and that we encourage others to, even privately.” I’d personally prefer it, in the future, if you wouldn’t focus on the blog posts and quotes. I think they act as only very weak evidence, and your use of them makes it feel a bit otherwise.
Almost every company has lots of public documents outlining their commitments to moral virtues.
I feel pretty confident that you were ignorant of the fraud. I would like there to be more clarity on what sorts of concrete measures were in place to prevent situations like this, and on what measures might change in the future to help make sure this doesn’t happen again.
There might also be many other concrete things that could be done to show your (and other senior people’s) care about these values.
Again, I appreciate the words, but if there’s one thing that the recent scandal taught us, it’s that it’s hard to take much from words. I don’t blame you here—but I would like us to have a culture where EAs can focus on evidence of credibility that’s much more high-signal than a list of previous altruistic writings.
All that said, I imagine that more rigorous evidence here will take more time.