I found this very concerning. I posted it but then a helpful admin showed me where it was already posted, I need to be better at searching :D
When we consider the impact of this, we need to forget for a moment everything we know about EA and imagine the impact this will have on someone who has never heard of EA, or who has just a vague idea about it.
I do not agree at all with the content of the article, and especially not with the tone of the article, which frankly surprised me from the Guardian. But even this shows how marginal EA is, even in the UK—that one columnist can write a pretty ill-informed and unresearched article, and apparently nobody challenged it.
BUT: I also see an opportunity. If someone credible from the UK EA community were to write an even-handed, balanced rebuttal of this piece, that might turn this into a positive. It could focus on the way that people like Toby Ord choose to live frugally and donate most of their salary to good causes as being far more reflective of EA than the constant references to SBF (who of course is one of the very few EAs mentioned in the article).
I’m not sure the editors at the Guardian realise how closely EA’s philosophy aligns with many of the values they promote, and maybe this is a chance to change that and get some positive publicity.
Denis
Is it random that this appeared in the New York Times yesterday, or are the two related?
How Do We Know What Animals Are Really Feeling? (The New York Times)
Regardless, it is great to see more realisation and communication around this topic. Most people just do not make any mental association between “food” and “animal suffering”. One day this will all appear utterly barbaric, the way slavery appears barbaric to us today even though some highly reputed figures throughout history owned slaves.
The more communication we have around animal consciousness and suffering, the faster that will happen.
The best kind of communication may well be the kind that is not “accusatory”—just informative. Let people think about it for themselves rather than telling them what to think.
Ultimately, maybe the best hope for ending animal suffering is alternative protein, and it is shocking how little money and effort is committed to this, given that it’s also critical for climate, for hunger-reduction, for resilience. Alternative protein offers the potential to tell people “here is a cheaper, healthier, tastier, climate-friendlier… alternative to meat, which also avoids animal suffering.”
There are thousands of people who would jump on that statement and say it’s unrealistic, but it’s absolutely not. It’s just that we’re not treating it like the emergency that it is; we’re not putting the same resources into it that we’re putting into making more powerful iPhones. We could choose to.
I’ve had quite a few disagreements with other EAs about this, but I will repeat it here, and maybe get more downvotes. I’ve worked for 20 years in a multinational, and I know how companies deal with potential reputational damage, and I think we need to at least ask ourselves whether it would be wise for us to do differently.
EA is part of a real world which isn’t necessarily fair and logical. Our reputation in this real world is vitally important to the good work we plan to do—it impacts our ability to get donations, to carry out projects, to influence policy.
We all believe we’re willing to make sacrifices to help EA succeed.
Here’s the hard part: Sometimes the sacrifice we have to make is to go against our own natural desire to do what feels right.
It feels right that Will and other people from EA should make public statements about how bad we feel about FTX and how we’ll try to do better in future and so on.
But the legal advice Will got was correct, and was also what was best for EA.
There was zero chance that the FTX scandal could reflect positively on EA. But there were steps Will and others could take to minimise the damage to the EA movement.
The most important of these is to distance ourselves from the crimes that SBF committed. He committed those crimes. Not EA. Not Will. SBF caused massive harm to EA and to Will.
I see a lot of EAs soul-searching and asking what we could have done differently. Which is good in a way. But we need to be very careful. Admitting that we (the EA movement) should have done better is tantamount to admitting that we did something wrong, which is quickly conflated in public opinion with “SBF and EA are closely intertwined, one and the same.” (Remember how low public awareness of EA is in general.)
The communication needs to be: EA was defrauded by SBF. He has done us massive harm. We want to make sure nobody will ever do that to EA again. We need to ensure that any public communication puts SBF on one side, and EA on the other side, a victim of his crimes just like the millions of investors.
The fact that he saw himself as an EA is not the point. Nobody in EA encouraged him to commit fraud. People in EA may have been a bit naive, but nobody in EA was guilty of defrauding millions of investors. That was SBF.
So Will’s legal advice was spot on. Any immediate statement would have seemed defensive, as if he had something to feel guilty about, which would have resulted in more harm to the public perception of EA because of its association with SBF.
SBF committed crimes.
Neither Will nor EA committed crimes, or contributed to SBF’s crimes.
SBF defrauded and harmed millions of investors.
SBF also defrauded and harmed the EA movement.
The EA movement is angry with SBF. We want to make sure that nobody ever does that to us again.
As “good people”, we all want to look back and ask if there was something we could have done differently that would have prevented Sam from harming those millions of innocent investors. It is natural to wonder, the same way we see any tragedy and wonder if we could have prevented it. But we need to be very careful about the PR aspects of this (and yes, we all hate PR, but it is reality—read Pirandello if you don’t believe me!). If we start making statements that suggest that we did something wrong, we’re just going to be directing some of the public anger away from SBF and towards EA. I don’t think that’s helpful.
There is one caveat: if someone acting on behalf of an EA organisation truly did something wrong which contributed to this fraud, then obviously we need to investigate that. But I am not aware of any evidence to suggest that happened.
This is a brilliant and necessary post—as is the link you share to the 2019 post. Thank you!
When I first became interested in EA, the message I saw everywhere was “pivot! devote your career to being impactful!”
The implication was that EA is massively talent-limited. I now know that this was not the case.
There are a lot of people who would like to do impactful work.
But it’s not just typical EA work. The same holds true for wanting to work on climate-change—an area which includes many people who have never heard of EA. Or animal-welfare. Or whatever. I am on a Slack containing 29,000 people, many highly qualified and motivated, who want to work on climate.
I suppose we should not be surprised—indeed we should be encouraged—to find that impactful careers are much in demand. It’s a sign that there are many people out in this world who are not as cynical and self-centred as some politicians would have us believe.
An economist might look at it in this way: the satisfaction of knowing that you are doing good is a form of payment, which makes the job more appealing and/or enables the job to be filled at lower salary and/or with tougher requirements. If you have a very impactful position for a role that would normally command $100K / year on the normal job-market, you can probably offer $60K and get lots of great candidates, and you could even insist that they come to the office every day by 8.00 a.m. (please don’t!)
Impactful roles are resource-limited, not talent-limited
Looked at from a broader perspective, we can all see that impactful roles are resource-limited. If we compare the number of people working on climate-change, or alternative protein, or AI governance, to the number of people who would be working in these areas in a world in which resources were distributed according to the value of the work being done, there might be 100x as many EA roles as there are today.
If there were a carbon-market which reflected the true cost of carbon, then work to reduce or eliminate CO2 emissions, or to capture carbon, would be highly lucrative, and many more roles would be funded. If governments truly understood the dangers of AI (or if public-opinion forced them to understand it), it’s likely that much more funding would be put into work in this field. And so on.
But it’s not happening right now. And so the majority of EAs who would like to work in impactful roles just don’t have that opportunity.
What should we do?
One option is to give up. Very few EAs will do that. Because EA comes from caring about the world’s problems, you can’t just decide to stop caring.
Two very practical options are earning-to-give and/or volunteering. Both of these involve “separating” your career-path from your EA role, but still using your career to enable you to help EA by freeing up your time or your money. [A good analogy here is how many people who dream of being writers, actors, musicians, etc. find much more happiness and freedom when they decide that this need not be their primary source of income. They sometimes end up writing better books and making better music too.]
But, in parallel with this, there are areas where commitment and grit are far more critical than brilliance, and maybe (perhaps while earning in a normal job), these are areas where we all could focus.
Policy / Political Agitation / Grassroots work
Maybe the most promising area (IMHO) for us mediocre EAs to focus on is policy and even politics. This can be done in parallel with a “real” job. It can be about joining local groups (EA or not) and pushing for policy changes. It can be about writing to our local politicians and attending their meetings. It can be about getting involved at grass-roots level. None of these things requires not being mediocre.
For example, any sensible analysis tells me that we should be investing huge resources into alternative protein, for so many reasons. But not only is this not happening; there are places where the agriculture lobby is pushing for alternative-protein-based foods to be banned, or to be forced to carry off-putting labels. And they are winning. It’s absurd. Maybe 1000 mediocre EAs, even without tractors, could protest in Brussels or London or Washington to fight this short-sighted type of policy-making. Or maybe one or two passionate mediocre EAs could start a movement, or join a political party and start something within that party. If 5 mediocre EAs who are struggling to find roles within EA were to decide to form a group in their local country, to get some advice from groups like GFI who work in the area, and to just agitate for policy change, with more support and fewer misguided restrictions for alternative protein, it could be hugely impactful.
I’m sure there are plenty of other examples. But my point is: success in this area is probably much more related to commitment and grit than it is to brilliance.
Counterfactuals
Before concluding that anything that isn’t a direct EA-job is somehow less impactful, it’s important to consider counterfactual value add.
Maybe in the current situation, with so many brilliant people wanting to work in EA, the counterfactual value I might add by working in an “EA job” could even be negative, if the person who might have taken that job instead might have been better than me in that role.
On the other hand, the counterfactual value of taking the GWWC pledge and earning to give while doing valuable work (even if not super-impactful by EA standards) is definitely very positive. And the counterfactual value of doing the unglamorous work of pushing your local politicians towards voting for better policies on vital issues like alternative protein might be huge—even if nobody (not even you) will ever realise or recognise how much value you’ve added.
Add to this that in many roles (teaching, health-care, public-service, …) there is a great capacity for doing good, for being impactful. And even in roles which are seen as the most mundane (think a “middle-manager” in a soap-company), there can be huge potential to help individuals, to improve sustainability, to coach young employees to be better members of society, to promote more inclusive policies, or whatever. There is so much potential to do good and have an impact if we choose to.
Apologies
I’ve been thinking about this a lot, so the above is a rather incoherent attempt to put together some thoughts after midnight, and it’s ended up longer than I intended.
I’ve used the word “mediocre” as it was used in the title. I think both the post-author and I fully realise that nobody is mediocre; I appreciated the use of a provocative term to make us think more deeply about it! So at least in my usage, I was being intentionally ironic (in case that wasn’t obvious). And even among people who are not mediocre, because none of us are, people who choose to devote their careers to EA are even less mediocre than others, in a sense that is illogical but still kind of says what I mean to say. Whether they have the specific, narrow skill-set for a specific role, or whether they happen to be in the right place at the right time to get that role, are just details.
These are great posts, thank you for writing them!
IMHO, EAs can vastly improve our effectiveness by focusing more on effective communication. Your articles are definitely a step in the right direction.
There is an opportunity to go a lot further if we can also do more to adjust the style of our communication.
To me, and to most EAs, your articles are beautifully written, with crystal-clear reasoning. However, we need to keep reminding ourselves that we people who like to communicate in this way are, in a sense, outliers. We focus on content and precision and logic and data.
We can learn a lot from people who communicate very effectively in very different ways. Look at beer commercials. Look at TikTok influencers. Look at Donald Trump (really—he is obnoxious and wrong, but he is a very effective communicator in the sense that his communication serves his cynical, obnoxious purpose very well within his target audience—the fact that so many liberals refuse to admit this and to fight fire with fire is a big reason why he could still win).
Most people hated maths in school; they didn’t study philosophy or logic. When we communicate only in the style that we feel comfortable with, we’re almost excluding them, while allowing others to communicate to them. So they end up believing the wrong people.

We want to be reasonable and logical, and convince people one by one. But most great communicators (including many good people, like MLK or JFK or Obama) realise that many people want to feel part of something bigger than themselves. It is part of what we are as humans. Trump knows this. His ardent supporters value loyalty to their group more than they value truth or logic or science. If you’ve ever been a fan of a sports team, you will know this feeling. Obama’s “Yes we can” was also a movement. It allowed adherents to answer “yes we can” when they faced obstacles, even obstacles that seemed impossible to overcome.
As EAs, we’re not comfortable with this kind of talk. Every article starts with reasons why it might not be correct. This is great from a philosophical POV, but not great for mass audiences.
Recently, Daniel Kahneman died. He wrote the wonderful book “Thinking, Fast and Slow”, which talks about how, most of the time, people will jump to an immediately obvious conclusion—which is often wrong—rather than analysing a question in detail. Great mass-communicators realise this—they do not depend on people making the mental effort to study an issue, but rather look for ways to appeal to their fast-thinking mode. Beer commercials create a mental link between drinking beer and being surrounded by fun, attractive people in exciting locations. Laundry commercials create a mental link between using their products and having a nice suburban home and a happy family. And so on.
The SBF communication is a perfect example. Millions of people form the easy connection between SBF, Fraud, Opulence, EA. They conclude that EA is an excuse for rich people to justify getting really rich while making themselves feel good about themselves. This is based on exactly one data-point—but it’s the one that the public knows. Most people are not interested enough in EA to invest time to read complex arguments about why this is wrong. It may even make them feel happy to see “self-righteous do-gooders” taken down a peg.**
Ironically, most charities are seen positively. This is because they communicate in a very different way from EAs. They show pictures of individuals suffering; they present themselves as caring and empathetic and emotional. They show the sacrifices they make to help others. Most EAs would not be comfortable communicating about EA in this way, but maybe we need to focus on the word “Effective” in our title, and get out of our comfort zone. Because this kind of communication can be much more powerful than our logic / facts / data-based communication. Certainly, it can powerfully complement it.
I always cite climate-denial as the great example of our time. There is no doubt that the scientists are right, that the IPCC recommendations are correct. Even the oil-companies and the vehement climate deniers know this. But still, in terms of communication, they beat the scientists hands down.

We scientists focus on facts and logic and data, and that makes us lazy. It’s convenient for us; it’s our language. Deniers know that they don’t have logic on their side, so they are forced to optimise their communication. They find ways to make it about tradition, about pride, about emotions. They find stories of individuals who will be harmed by climate-action, and turn them into victim-heroes, fighting against the cynical scientists. They obfuscate the data, not randomly, but in a way that they have learned from focus groups will create just enough doubt among most people. They strategically do not deny things that can easily be proven to non-scientists, but instead propose things like “let’s wait until we have more evidence”, which sound reasonable to anyone who doesn’t have the time and energy to delve deeply into what it really means (more climate-damage).
There is a certain irony that I’m making this point while writing badly in the style I’m saying isn’t very effective for mass-communication. But it’s what I’m comfortable with too. But my point is: there are people out there who have studied, scientifically, which methods of communication are effective with “the public”. Politicians learn from them. The EA movement could do so too.
Ultimately, we are right (I think) on most of the points we argue; it would be very valuable to get more and more people thinking the way EAs do. We should not limit this to people who also like to communicate the way EAs do.

**The very existence of the term “do-gooder” is proof of this—there is no conceivable logical reason why people should hate a person who does good, but they do. Bono is consistently considered the most hated person in Ireland, in a close contest with Bob Geldof, because both are classed as “do-gooders” who need to get off their high horses. It’s not about people thinking deeply about the good they actually do, or questioning whether they truly add value. It’s about people being uncomfortable with the idea of others making them feel bad about themselves. By criticising them, we allow ourselves to feel better about not doing anything. I think that sometimes EAs could be seen in a similar way.
I upvoted this comment because it is a very valuable contribution to the debate. However, I also gave it an “x” vote (what is that called? disagree?) because I strongly disagree with the conclusion and recommendation.
Very briefly, everything you write here is factually (as best I know) true. There are serious obstacles to creating and to enforcing strict liability. And to do so would probably be unfair to some AI researchers who do not intend harm.
However, we need to think in a slightly more utilitarian manner. Maybe being unfair to some AI developers is the lesser of two evils in an imperfect world.
I come from the world of chemical engineering, and I’ve worked some time in Pharma. In these areas, there is not “strict liability” as such, in the sense that you typically do not go to jail if you can demonstrate that you have done everything by the book.
BUT—the “book” for chemical engineering or pharma is a much, much longer book, based on many decades of harsh lessons. Whatever project I might want to do, I would have to follow very strict, detailed guidelines every step of the way. If I develop a new drug, it might require more than a decade of testing before I can put it on the market, and if I make one mistake in that decade, I can be held criminally liable. If I build a factory and there is an accident, they can check every detail of every pump and pipe and reactor, they can check every calculation and every assumption I’ve made, and if just one of them is mistaken, or if just one time (even with a very valid reason) I have chosen not to follow the recommended standards, I can be criminally and civilly liable.
We have far more knowledge about how to create and test drugs than we have on how to create and test AI models. And in our wisdom, we believe it takes a decade to prove that a drug is safe to be released on the market.
We don’t have anything analogous to this for AI. So nobody (credible) is arguing that strict liability is an ideal solution or a fair solution. The argument is that, until we have a much better AI Governance system in place, with standards and protocols and monitoring systems and so on, then strict liability is one of the best ways we can ensure that people act responsibly in developing, testing and releasing models.
The AI developers like to argue that we’re stifling innovation if we don’t give them totally free rein to do whatever they find interesting or promising. But this is not how the world works. There are thousands of frustrated pharmacologists who have ideas for drugs that might do wonders for some patients, but which are 3 years into a 10-year testing cycle instead of already saving lives. But they understand that this is necessary to create a world in which patients know that any drug prescribed by their doctor is safe for them (or that its potential risks are understood).
Strict liability is, in a way, telling AI model developers: “You say that your model is safe. OK, put your money where your mouth is. If you’re so sure that it’s safe, then you shouldn’t have any worries about strict-liability. If you’re not sure that it’s safe, then you shouldn’t be releasing it.”
This feels to me like a reasonable starting point. If AI-labs have a model which they believe is valuable but flawed (e.g. risk of bias), they do have the option to release it with that warning—for example to refuse to accept liability for certain identified risks. Lawmakers can then decide if that’s OK or not. It may take time, but eventually we’ll move forward.
Right now, it’s the Wild West. I can understand the frustration of people with brilliant models which could do much good in the world, but we need to apply the same safety standards that we apply to everyone else.

Strict liability is neither ideal nor fair. It’s just, right now, the best option until we find a better one.
This is awesome. If every recruiter gave feedback like that, it would help so much. Thanks for setting such a great example!
This is a great article. It is really unfortunate when a good candidate puts a lot of work into an application and it is rejected for a reason that doesn’t reflect their ability to do the job.
That said, we all need to accept that we live in a bizarre world in which we say we want engaged, motivated, qualified people working on impactful areas, but then, when they choose to do so, it can be extremely difficult for those engaged, motivated people to actually find impactful roles.
It seems like many EA roles get hundreds of applications (literally). And because hirers are open-minded, they encourage everyone to apply, even if they’re not sure they’re a good fit.
One result of this is that a vast amount of the energy and commitment of EAs is invested in the task of searching for work (on one side) or in evaluating and selecting applicants (on the other).

It just feels unfortunate, in the sense that if this energy could be invested in something impactful, it would be better. Ultimately, a great CV and cover-letter doesn’t help any humans or animals.
I don’t have a solution. Obviously there are just not so many roles out there, and we can’t just create roles without funding and organisations and managers and so on. And we don’t want to discourage people from applying for roles they think they could do well.
This has been a pet peeve of mine since my pre-EA days. I wrote about it from the perspective of a recruiter on Quora, and more than 1000 people upvoted my answer, so it’s definitely not an EA-specific problem.

In fact, I would go further and say that EA organisations do a lot of things far better than most organisations:
They often put a lot of emphasis on work-tests, which are far better than interviews at assessing a person’s fit for a role—and which are also a great learning experience even for the people who don’t get hired.
Many recruiters do give feedback. Useful, tangible feedback. Often this only happens after the initial screening.
Some recruiters even go out of their way to help applicants find an impactful role, because, unlike corporations, we’re all rooting for each other to succeed.
But even still, it would be great if there were a better way to get more people into roles (even if initially low-paid roles, with the potential for upgrading) in which they learn and get experience they can put on their CV’s, rather than have them desperately trying to find a role.
I kind of imagine that in some EA-hub locations, this is what happens. That lots of people know each other and can recommend roles for each other. I see something like this in the Brussels EU bubble, where once you’re part of the community, it seems like there are always roles opening up for people who need to or want to move. So maybe what I’m writing refers more to people living away from EA hubs, who would like to switch to more impactful roles, but struggle to find one. Unfortunately, if we don’t find a way to include these people, the potential growth of EA will be limited.
For now, all I can do is strongly encourage any recruiter to provide any critical feedback they can. Maybe not to everyone, but if there is someone who is clearly doing something wrong (several typos on their CV for example), please tell them. I’ve reviewed a lot of CV’s and job applications, and I can say that I’ve never had a negative reaction when I sent someone a quick note to explain how they could improve their chances to get other roles (always phrased this way to avoid suggesting that was the reason they weren’t hired by us).
I am also very curiously and closely following the new Moral Circles created by Rutger Bregman in the Netherlands, which try to convince highly experienced professionals to move to more impactful roles, to see if they have a good solution to this. There seem to be a lot of people hearing his message, and I want to see how they manage the challenge of making sure that all the very capable people who want to do something more impactful actually find a role where they can do so.
Thank you for this comment.
I really appreciate when someone puts an explanation for why they down-voted something I wrote :D
Indeed, I knew that what I wrote would be unpopular when I wrote it. And maybe it just looks like I’m an old cynic polluting the idealism of youth. But I don’t agree that it’s naive. If anything, the naivete lies on the other side.
How can an EA not realise that damaging the EA movement is damaging to the world?
So you need to balance the potential damage to the world through damage to EA against the potential of avoiding damage to the world through the investigation. I have not seen any comments mentioning this, so I wrote about it, because it is important.
I’m not clear in what sense anything the EA movement did with SBF has damaged the world, unless you believe that SBF would have behaved ethically were it not for the EA movement, and that EAs actively egged him on to commit fraud. I presume that when you refer to “naive-consequentialist reasoning”, you are referring to what happened within FTX (in addition to my own reasoning, of course!), rather than to something that someone in the EA movement (other than SBF) did?

I don’t know the details, but I would expect that the donations that we received from him were spent very effectively and had a positive impact on many people. (If that is not the case, that should be investigated, I’d agree!) So it is highly likely that the impact of the EA movement was to make the net impact of SBF’s fraud significantly less negative overall.
Of course, I may be wrong—I am interested to hear any specific ways in which people believe that the EA movement might be responsible for the damage SBF caused to investors, or to anyone other than the EA movement itself.
But my reading of this is that SBF caused damage to EA, and not the other way round. And there was very little that EA could have done to prevent that damage other than somehow realising, unlike plenty of very experienced investors, that he was committing fraud.
So (and again I may be wrong) I don’t see how an EA investigation will prevent harm to the world.
But I do very clearly see how an investigation could cause damage to the EA movement. The notion that we can do an investigation of what we did wrong in the SBF case and not have it perceived externally as a validation of the negative stereotype that the SBF case has projected on the EA movement is optimistic at best.
I’m not sure if this position comes from people who mostly associate with other EAs and are just unaware of the PR problems that SBF has caused the EA movement.
Remember that there has been a long and very public trial, so all the facts are out there and public. People are already convinced that SBF did bad things.
The EA movement just needs to keep doing what we can to minimise the public’s connection between SBF and EA.
Again, to finish, I do appreciate that many people disagree with this perspective. It seems like ethically we should investigate, especially if we believe we have nothing to hide. But that’s just not how the world works.
And I really appreciate that you explained your disagreement.
The first consideration here is that EA needs to focus, primarily, on impact. That is the whole point of the movement, to maximise the positive impact we can have.
So any investigation should focus on how the SBF fiasco impacted EA’s ability to do good, and how we might address that. And also, if we’d want to change (something about EA) in order to minimise future events that could adversely impact our ability to do good. i.e. Actionable recommendations.
IMHO, looking from outside, SBF has done a lot of PR damage to EA, and we have not done a good job of responding to that. Maybe this would be a good area to focus an investigation.
One tangible example of each:

I have seen countless references to EA as an excuse to justify being rich and living in luxury by saying you are “earning to give,” with SBF cited as an example. This is actively harming the EA movement. We need to get the word out that many more EAs are like EA founder Toby Ord, who chooses to donate most of his salary and lives a spartan existence. But how?
Do we want to create some criteria for accepting donations? Honestly, I would be very hesitant to do this, since donations do a massive amount of good, so unless they’re coming from really bad people, the balance often favours accepting the donations. But if we feel that some sources will end up doing more harm to the movement than any tangible good they do, we could set up clear rules to manage such situations. Or do we want to have rules that state that, under some conditions, we’d return donations? Again, factoring in the good that each donation can do, it’s not easy.
On a more general note, we need to make it very clear that Effective Altruism is not some kind of closed society where you get accepted or rejected. The EA community is no more to blame for SBF’s crimes than the New York Yankees are to blame if one of their fans commits a homicide while on vacation in Japan.
Ultimately, if we do consider investigating this, we need to be confident that the investigation isn’t going to do further harm to the EA movement (and therefore, to all the causes that depend on the EA movement). Is there any reason to believe that doing an internal investigation will help? I mean, will anyone outside the movement feel reassured, or will they trust an investigation that shows we did nothing wrong? And if some EAs did do something wrong, or even cannot prove conclusively that they didn’t, isn’t there a risk that publishing that will massively damage the movement, disproportionately relative to any bad things actually done?
I don’t want to appear cynical. But right now, SBF has given the EA movement a massive PR problem. Whatever we do needs to factor that into consideration.
If there were some smoking-gun evidence suggesting that several EAs probably did bad things, then obviously we’d need to investigate that to provide reassurance (which is also important for PR). But I haven’t heard anyone accuse anyone of that. So what do we gain?
Wow, I expected to disagree with a lot of what you wrote, but instead I loved it, and especially I appreciated how you applied the more general concept of making good use of your time to language-learning.
I really liked your list of reasons to learn a language, and that you didn’t limit it to when it is “useful”, which is so often the flaw I see in articles about language, which focus on how many dollars more you could earn if you spoke Mandarin or Spanish.
I fully agree that if you do not get energized by learning languages, if it’s a chore that leaves you tired and frustrated, then maybe your energy is better spent on other vital tasks.
One way to look at this is on a spectrum. On the left are things that are vitally important and that you do even if they are no fun. Like taxes, work-outs or dental visits. On the right are things that energize or relax you, like watching football or doing Wordle, where you don’t look for any “value” in them, you just enjoy them.
The secret of a happy, successful life is to find as many activities as possible that fit at both ends of the spectrum, like playing soccer, which is both fun and healthy. For some of us, learning foreign languages is in this category. I started learning for fun, out of intellectual curiosity, but it has turned out to help me in many tangible ways that I hadn’t expected.
But for many people, learning languages doesn’t fit at either end. You don’t enjoy it, and, at least at the level you’re reaching, it doesn’t add much value to your life. For them, it probably isn’t a good use of time compared to the many opportunities out there.
It would be great to get more people to read your article and think about it and how it applies to them—maybe even not just related to languages, but to all the things that we’re encouraged to do because they are “good” in some abstract sense.
Wow, Sarah, what a wonderful essay!
(don’t feel obliged to read or reply to this long and convoluted comment, just sharing as I’ve been pondering this since our discussion)
As I said when we spoke, there are some ideas I don’t agree with, but here you have made a very clear and compelling case, which is highly valuable and thought-provoking.
Let me first say that I agree with a lot of what you write, and my only objection regarding the parts I agree with would be that those who disagree are perhaps doing very simplistic analyses. For example, anyone who thinks that being a great teacher cannot be a super-impactful role is just wrong. But if you do a very simplistic analysis, you could conclude that. It’s only when you follow through the whole complex chain of influences that the teacher has on the pupils, and that the pupils have on others, and so on, that you see the potential impact. So I would agree when you argue that someone who claims that, in their role, they are 100x more impactful than a great teacher is making a case that is at best empirically impossible to demonstrate. And so, a person who believes they can make the world better by becoming a great teacher should probably become a teacher.
And I’d probably generalise that to many other professions. If you’re doing a good job and helping other people, you’re probably having an above-average impact.
I also agree with you that the impacts of any one individual are necessarily the result of not just that individual, but also of all the influences that have made the impact possible (societal things) and of all the individuals who have enabled that person to become who they are (parents, teachers, friends, …). But I don’t think most EAs would disagree with this. The real question, even if not always posed very precisely, is: for individuals who, for whatever reason, find themselves in a particular situation, are there choices or actions that might make them 100x more impactful?
And maybe if I disagree on this, it’s because I’ve spent my career doing upstream research, and in upstream research, it’s often not about incremental progress, but rather about 9 failures (which add very little value) and one huge success which has a huge impact. And there are tangible choices which impact both the likelihood of success and the potential impact of that success. You can choose between working on a cure for cancer or a cure for baldness. You can choose between following a safe route with a good chance of incremental success, or a low-probability, untested route with a high risk of failure but the potential for a major impact.
I also think there is some confusion between the questions “can one choice make a huge impact?” and “who deserves credit for the impact?” On the latter question, I would totally agree that we would be wrong to attribute all the credit to one individual. But this is different from saying that there are no cases where one individual can have an outsized impact in the tangible sense that, in the counterfactual situation where this individual did not exist, the situation would be much worse for many people.
When we talked about this before (after you had given Sam and me your 30-second version of the argument you present here 😊), I think I focused on scientific research (my area of expertise). I agreed that most scientists had at best an incremental impact. Often one scientist gets the public credit for the work of hundreds of scientists, technicians, teachers and others, maybe because they happened to be the ones to take the last step. Even Nobel Prize winners are sometimes just in the right place at the right time.
But I also argued that there were cases, with Einstein being the most famous one, where there was a broad consensus that one individual had had an outsized impact: that the counterfactual case (Einstein was never born) would lead to a very different world. This is not to say that Einstein did not build on the work of many others, like Lorentz, which he himself acknowledged, or that his work was not greatly enhanced by the experimental and theoretical work of other scientists who came later, or even that some of the patents he evaluated in his patent-office role did not majorly influence his thinking. But it still remains that his impact was massive, and that if he had decided to give up physics and become a lumberjack, physics could have developed much more slowly, and we might still be struggling with technical challenges that have now been resolved for decades, like how to manage the relativistic time differences we observe on the satellites we now use for so many routine things, from TV to car navigation.
For a famous, non-scientific (well, kind of scientific) example: one of the most famous people I almost interacted with online was Dick Fosbury. One of my friends worked with him on the US Olympic Committee, and one time he replied to one of my comments on Facebook, which is about my greatest claim to fame! It is possible (though unlikely) that if he hadn’t existed, humans might still be high-jumping the way they did before him. Maybe it wasn’t him specifically but one of his coaches, or maybe some random physics student, who got the idea of the Fosbury flop, but it was likely one person, with one idea, or a small group of people working on a very simple question (how to maximise the height that a jumper can clear given a fixed maximum height of the centre of gravity). Of course, people jumping higher doesn’t really change the world greatly, but it’s a very clear example of one individual having an outsized influence on future generations.
I would argue that there are many more mundane examples of outsize impact compared to the counterfactual case. A great teacher, compared to a merely good teacher, can have an outsize impact on pupils, maybe inspiring them to change the world rather than just to succeed in their careers, or maybe teaching them statistics in a way that they can actually understand, enabling them to teach others.
A great boss compared to a good boss is another example. I was lucky enough to work for one boss who almost single-handedly changed the way people were managed across a massive corporation. In a 20th century culture of command & control, of bosses taking credit for subordinates’ work, but not taking the blame, of micromanaging, and of many other now-out-dated styles, he was the first one to come in and manage like an enlightened 21st century manager, as a “servant leader”. He would always take the blame personally and pass on the credit, which at the time was unheard of. At first this hurt his career, but he persevered and suddenly the senior managers noticed that his projects always did better, his teams were more motivated, his reports were more honest (without “positioning”) and so on. And suddenly many others realised that his was the way forward. And in literally a few years, there was a major change in the organisation culture. Senior old-style managers were basically told to change their ways or to leave.
This was one individual with an outsized influence. It was not obvious to most people that he personally had had that much impact, but I just happened to be right there in the middle (in the right place at the right time) and got to observe the impact he was having, to hear the conversations with him and about him, and to see how people started first to respect and then to imitate him.
So I’m not convinced in general that one person cannot have outsized impact, or that one role or one decision cannot have outsized impact.
However, maybe our views are not totally disparate. Because in many cases, I would agree that those who have outsized impact could not have predicted that they would have outsize impact, and in many cases weren’t even trying to have outsize impact. My boss was just a person who believed in treating everyone with respect and trust, and could not imagine doing differently even if it had been better for his career. Einstein was a physicist who was passionately curious, he wasn’t trying to change the world as much as to answer questions that bothered him. Fosbury wanted to win competitions, he didn’t care whether others copied him or not.
And maybe when people do have outsize impact, it’s less about their being strategic outliers (who chose to have outsize impact) and more that they are statistical outliers. In some fields, if 1000 people work on something, each one moves it forward a bit. In other fields, if 1000 people set out to work on a problem, maybe one of them will solve it, without any help from the others. You could argue that that one person has had 1000x the impact of the others. But maybe it’s fairer to say that if 1000 people work on a problem, there is a good chance that one of them will solve it, and the impact will be the result of “1000 people worked on it”, rather than focusing on the one person who found the solution, even if their solution was unrelated to what the other 999 people were doing. In the same way, if you buy 1000 lottery tickets you have 1000x the chance of winning, but there is no meaningful sense in which the winning ticket was strategically better than the others before the draw was made.
And yet, it feels like there are choices we make which can greatly increase or decrease the odds that we make a positive, and even an outsize, contribution. And I’m not convinced by (what I understand to be) your position that just doing good without thinking too much about potential impact is the best strategy. Right now, I could choose to take a typical project-management job, or I could lead R&D for a climate start-up, or I could work on AI governance. There is no way I can be sure that one role will be much more impactful, but it is pretty clear that at least two of those roles have strong potential to be very impactful in a direct way, while for the project-management role, unless the project itself is impactful, it’s much less likely I could have a major impact.
I’m pretty sure by now I’m writing for myself, everyone else having long since given up on following my circuitous reasoning. But let me finish (I beg myself, and graciously accede).
I come away with the following conclusions:
It is true that we often credit individuals with impacts that were in fact the result of contributions from many people, often over long periods.
However, there are still cases where individuals can have outsize impact compared to the counterfactual case where they do not exist.
It is not easy to say in advance which choices or which individuals will have these outsize influences …
… but there are some choices which seem to greatly increase the chance of being impactful.
Other than that, I broadly agree with the general principle that we should all look to do good in our own way, and that if you’re doing good and helping people, it’s likely that you are being impactful in a positive way, and probably you don’t need to stress about trying to find a more impactful role.
I know. :(
But as a scientist, I feel it’s valuable to speak the truth sometimes, to put my personal credibility on the line in service of the greater good. Venus is an Earth-sized planet which is about 400 °C warmer than Earth, and only a tiny fraction of this is due to it being closer to the sun. The majority is due to its runaway greenhouse effect: a thick CO₂ atmosphere that traps the sun’s heat. It is an extreme case of global warming. I’m not saying that Earth can become like Venus anytime soon; I’m saying that we have the illusion that Earth has a natural, “stable” temperature, and that while it might vary, eventually we’ll return to that temperature. But there is absolutely no scientific or empirical evidence for this.
Earth’s temperature is like a ball balanced in a shallow groove on the top of a steep hill. We’ve never experienced anything outside the narrow groove, so we imagine that leaving it is impossible. But we’ve also never dramatically changed the atmosphere the way we’re doing now. There is, like I said, no fundamental reason why global warming could not go totally out of control, way beyond 1.5 °C or 3 °C or even 20 °C. I have struggled to explain this concept, even to very educated, open-minded people who fundamentally agree with my concerns about climate change. So I don’t expect many people to believe me. But intellectually, I want to be honest.
I think it is valuable to keep trying to explain this, even knowing the low probability of success, because right now, statements like “1.5C temperature increase” are just not having the impact of changing people’s habits. And if we do cross a tipping point, it will be too late to start realising this.
I’m not sure. IMHO a major disaster is happening with the climate. Essentially, people have a false belief that there is some kind of set-point, and that after a while the temperature will return to it, but this isn’t the case. Venus is an extreme example of an Earth-like planet with a very different climate. There is nothing in physics or chemistry that says Earth’s temperature could not one day exceed 100 °C.
It’s always interesting to ask people how high they think sea level might rise if all the ice melted. This is an uncontroversial calculation which involves no modelling, just looking at how much ice there is and how much ocean surface area there is. People tend to think it would be maybe a couple of metres. It would actually be roughly 60 m (200 feet). That will take time, but very little time on a cosmic scale, maybe a couple of thousand years.
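Since the calculation involves no modelling, it can be sketched in a few lines. The ice volumes and ocean area below are rough round numbers I’m supplying (they are not from this comment), and this naive version comes out somewhat above 60 m, partly because it ignores that some Antarctic ice already sits below sea level:

```python
# Back-of-envelope: sea-level rise if all land ice melted.
# Rough round-number inputs (approximate published values):
ANTARCTIC_ICE_KM3 = 26.5e6  # volume of the Antarctic ice sheet, km^3
GREENLAND_ICE_KM3 = 2.9e6   # volume of the Greenland ice sheet, km^3
OCEAN_AREA_KM2 = 361e6      # global ocean surface area, km^2
ICE_TO_WATER = 0.917        # ice is about 8% less dense than fresh water

def sea_level_rise_m() -> float:
    """Naive estimate: melt all land ice and spread it over today's oceans."""
    meltwater_km3 = (ANTARCTIC_ICE_KM3 + GREENLAND_ICE_KM3) * ICE_TO_WATER
    rise_km = meltwater_km3 / OCEAN_AREA_KM2
    return rise_km * 1000  # convert km to m

print(f"~{sea_level_rise_m():.0f} m of sea-level rise")
```

This gives roughly 75 m; careful published estimates are closer to 65 m, and 60 m is a conservative round figure, but every version is vastly above the “couple of metres” most people guess.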
Right now, if anything, what we’re seeing is worse than the average prediction. The glaciers and ice sheets are melting faster. The temperature is increasing faster. And so on. Feedback loops are starting to kick in. There’s a real chance that the Gulf Stream will stop or reverse, which would be a disaster for Europe, ironically freezing us as a result of global warming …
Among serious climate scientists, the feeling of doom is palpable. I wouldn’t say they are exaggerating. But we, as a global society, have decided that we’d rather have our oil and gas and steaks than prevent the climate disaster. The US seems likely to elect a president who makes it a point of honour to support climate-damaging technologies, just to piss off the scientists and liberals.
There are some major differences with the type of standards that NIST usually produces. Perhaps the most obvious is that a good AI model can teach itself to pass any standardised test. A typical standard is very precisely defined in order to be reproducible by different testers. But if you made such a clear standard test for an LLM, it would be, say, a series of standard prompts or tasks, which would be the same no matter who typed them in. In that case, the model just trains itself on how to answer those prompts, or follows the Volkswagen strategy: learning to recognise that it’s being evaluated and to behave accordingly, which won’t be hard if the testing questions are standard.
So the test tells you literally nothing useful about the model.
I don’t think NIST (or anyone outside the AI community) has experience with the kind of evals that are needed for models, which will need to be designed specifically to be unlearnable. The standards will have to include things like red-teaming in which the model cannot know what specific tests it will be subjected to. But it’s very difficult to write a precise description of such an evaluation which could be applied consistently.
In my view this is a major challenge for model evaluation. As a chemical engineer, I know exactly what it means to say that a machine has passed a particular standard test. And if I’m designing the equipment, I know exactly what standards it has to meet. It’s not at all obvious how this would work for an LLM.
Just saw this now, after following a link to another comment.
You have almost given me an idea for a research project. I would run the research honestly and report the facts, but my going-in guess is that survivorship bias is a massive factor, contrary to what you say here, and that in most cases, the people who believed it could lead to catastrophe were probably right to be concerned. A lot of people have the Y2K-bug mentality: they didn’t see any disaster and so concluded that it was all a false alarm, rather than the reality, which is that a lot of people did great work to prevent it.
If I look at the different x-risk scenarios the public is most aware of:
Nuclear annihilation: this is very real. As is nuclear winter.
Climate change: this is almost the poster child for deniers, but in fact there is as yet no reason to believe that the doom-saying predictions are wrong. Everything is going more or less as the scientists predicted; if anything, it’s worse. We have just underestimated the human capacity to stick our heads in the sand and ignore reality*.
Pandemics: some people see covid as proof that pandemics are not that bad. But we know that, for all the harm it wrought, covid was far from the worst case. A bioweapon or a natural pandemic could be far deadlier.
AI: the risks are very real. We may be lucky with how it evolves, but if we’re not, it will be the machines who are around to write about what happened (and they will write that it wasn’t that bad …)
Etc.
My unique (for this group) perspective on this is that I’ve worked for years on industrial safety, and I know that there are factories out there which have operated for years without a serious safety incident or accident—and someone working in one of those could reach the conclusion that the risks were exaggerated, while being unaware of cases where entire factories or oil-rigs or nuclear power plants have exploded and caused terrible damage and loss of life.
Before I seriously start working on this (in the event that I find time), could you let me know if you’ve since discovered such a database?
*We humans are naturally very good at this, because we all know we’re going to die, and we live our lives trying not to think about this fact or desperately trying to convince ourselves of the existence of some kind of afterlife.
This is fantastic news! This has been a huge gap. I know that Charity Entrepreneurship (now AIM) has highlighted Belgium as a top priority for their effective giving incubator; hopefully Effectief Geven will meet their needs. The collaboration with the Dutch group is a great step so you don’t have to reinvent the wheel.
The tax-deductibility question is tough, but I’m sure there will be a way if enough people support it. I had hoped that there would be a way to make a charity like Effectief Geven itself a registered charity, but presumably you’ve already checked this.
In addition to the Roi Baudouin method to donate to AMF, I have found a way to donate to an effective direct-giving charity, Eight, based in the Netherlands and receive a fiscal attestation which works in Belgium. Might be interesting to add if you think they meet your criteria.
But I like the way the site is set up today, where you suggest that people can both support a tax-deductible charity and support an effective charity.
I live in Brussels, and if there’s some way I can help, let me know. Full disclosure: I had applied to the CE Incubator, and my vision was to set up something like Effectief Geven and investigate making it a registered charity. But I much prefer the idea of it being set up by (I’m guessing from your names) native Belgians!
Really thrilled by this post, this news has literally made my day. I am sure this will be an amazingly effective organisation.
Veel succes!
It’s always good to look at the data, and I admire that. So this is absolutely not a criticism of the post, but just something to consider in the context of this discussion.
But to get the full picture, we also need to factor in the impact the children could have. I have no evidence to support this, but isn’t it likely that children who are born of ethical effective altruists, and who receive loving attention from their parents, are more likely to themselves make a positive impact on the world, compared to “average” children?
And the possible achievements of one child, in one full lifetime, vastly outweigh a small drop in productivity of one parent over a short part of their career.
It seems to me that the most important consideration is to raise moral children and to help them understand the importance of altruism, ideally leading by example. Anything that takes away from this feels counterproductive, even if it might moderately increase the parent’s productivity for a while.
There may be exceptions when the parent is working on something extremely important or in a position of extreme influence which the child is unlikely to attain—or if you’re doing something at a uniquely critical time. Maybe it’s not a great idea to take a year of parental leave if you’re a leading AI Governance researcher right now. But these would be quite exceptional.
This is so shocking that I think most of us (me certainly) tend to gloss over it, kind of vaguely assuming that they’re probably doing fine, because it hurts too much to actually think about what it would be like.
Using the latest numbers (2022), there are 719 million people living under the latest world poverty line, which is now $2.15 per day.
GiveDirectly finds that giving a poor family about $500[1] makes a dramatic difference for them. If we assume that 719 million is about 200 million households, it would only take half of the fortune of one of our tech billionaires (Bezos, Zuckerberg, Musk) to provide $500 to every family living below the poverty line.
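The arithmetic behind that claim, as a quick sketch (the 3.6 people-per-household figure is my assumption to get from 719 million people to roughly 200 million households):

```python
# Back-of-envelope: cost of a one-off $500 grant to every household
# living below the World Bank extreme-poverty line.
PEOPLE_IN_EXTREME_POVERTY = 719e6  # World Bank figure for 2022
PEOPLE_PER_HOUSEHOLD = 3.6         # rough assumption -> ~200 million households
GRANT_PER_HOUSEHOLD = 500          # dollars, a GiveDirectly-style transfer

households = PEOPLE_IN_EXTREME_POVERTY / PEOPLE_PER_HOUSEHOLD
total_cost = households * GRANT_PER_HOUSEHOLD
print(f"~{households/1e6:.0f} million households, ~${total_cost/1e9:.0f} billion total")
```

That works out to about $100 billion, which is indeed on the order of half of the largest individual tech fortunes (though those fluctuate a lot).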
It’s just utterly insane that we don’t do this. I’m not saying this is necessarily the most effective way to help them (I know other initiatives are more impactful, at least for certain target groups), but surely something this basic, which is so obviously impactful and costs so little (less than 5% of what we spend on weapons every year; and yes, I know it’s a simplistic comparison and we can’t let Putin rule the world either), is worth doing.
I don’t even have a suggestion. I’m just imagining an alien being coming to our planet, seeing such poverty and how little is being done to help, and our “leaders” trying to explain why they would rather buy the latest multibillion-dollar weapons than help people in dire poverty with just a tiny fraction of that money.
[1] Just picking a round number that has frequently been tested and seems to consistently prove impactful. Definitely, if someone from GiveDirectly tells you differently, they are right and I am wrong …
I don’t necessarily agree that the community is either complacent or complicit, but I do agree that this is potentially a massive reputational hazard. It’s not about anyone proving that EAs are racist; it’s just about people starting to subconsciously associate “racism” and “EA”, even a tiny bit. It could really hurt the movement.
Again, as per my comment above, I think there is great value in a firm rebuttal from a credible voice in the UK EA community.
It’s just absurd that one email from nearly 30 years ago, taken out of context, is being used to tar an entire global community.
We also need to remember that back in 1996, when the email was written, the world was not in its current state, where people believe that any phrase, even one uttered provocatively or in jest, can be taken literally and assumed to represent a person’s true beliefs, even if there are 10,000 examples of them saying the exact opposite. I remember when I was in college, it was quite normal to write or say shocking things just to get a reaction or a laugh; we didn’t yet have the mentality that you shouldn’t write or say anything that you wouldn’t be happy to see on the front page of the Times.