Ok, an incomplete and quick response to the comments below (sorry for typos). Thanks to the kind person who alerted me to this discussion (I still don't spend my time on your forum, so please do just PM me if you think I should respond to something)
1.
- regarding blaming Will or benefitting from the media attention
- I don't think Will is at fault alone; that would be ridiculous. I do think it would have been easy for him to make sure something was done, if only because he can delegate more easily than others (see below)
- my tweets are a reaction to his tweets, where he says he believes he was wrong to deprioritise these measures
- given that he only says this after FTX collapsed, my point is that it's annoying this had to happen before people accept that institutional incentive-setting needs to be further prioritised
- journalists keep wanting me to say that Will is to blame, and I have had several interviews in which I argue against this simplistic position
2.
- I'm rather sick of hearing from EAs that I'm arguing in bad faith
- if I wanted to play nasty, it wouldn't be hard (for anyone) to find attack lines; e.g. I have not spoken about my experience of sexual misconduct in EA, and I continue to refuse to name names in respect of the specific actions I criticise or keep being passed information about, because I want to make sure the debate is about incentives/structures, not about individuals
- a note on me exploiting the moment of FTX to get media attention
- really?
- please join me in speaking with the public or with journalists; you'll see it's no fun at all. I have a lot of things I'd rather be doing. Many people will be able to confirm that I've tried to convince them to speak out too, but I failed, likely because
- it's pretty risky: you end up having rather little control over how your quotes will be used, so you just hope to work with someone who cares, but every journalist has a preconception of course. It's also pretty time-consuming with very little impact, and then you have to deal with forum debates like this one. But hey, if anyone wants to join me, I encourage anyone who wants to speak to the press to message me and I'll put you in touch.
- the reason I do it is that I think EA will 'work', just not in the way that many good people in it intend it to
3.
- I indeed agree that these measures are not ‘proven’ to be good because of FTX
- I think they were a good idea before FTX and they continue to be good ideas
- they are not 'my' ideas; they are absolutely standard measures against misconduct in big bureaucracies
- I don't want anyone to 'implement my recommendations' just because they're apparently mine (they are not). They are a far bigger project than a single person should handle, and my hope was that the EA community would be full of people who'd maybe take them as inspiration and do something with them in their local context; it would then be their implementation.
- I liked the responses I got on Twitter saying that FTX was in fact the first to do re-granting
- I agree and I thought that was great!
- in fact they were interested in funding a bunch of projects I care a lot about, including a whole section on ‘epistemics’! I’m not sure it was done for the right reasons (maybe the incentive to spend money fast was also at play), and the re-granting was done without any academic rigor, data collection or metrics about how well it works (as far as I know), but I was still happy to see it
- I don’t see how this invalidates the claim that re-granting is a good idea though
4.
- those who only want to know whether my recommendations would have prevented this specific debacle are missing the point. Someone may have blown the whistle, some transparency may have helped raise alarms, fewer people may have accepted the money, distributed funding may have meant more risk-averse people would have had a say about whether to accept the money or not. Risk reduction is about reduction, not bringing risk down to 0. So, do those measures, depending on how they're set up, reduce risk? Yes, I can see how they would. E.g. is it true that there were Slack messages on some Slack for leaders which warned against SBF, or is it true that several organisations decided (but don't disclose why) against taking FTX funding (https://www.newyorker.com/news/annals-of-inquiry/sam-bankman-fried-effective-altruism-and-the-question-of-complicity)? I don't know enough about the people involved to say what each would have needed to be incentivised to be more public about their concerns. But do you not think it would have been useful knowledge to have available, e.g. for those EA members who got individual grants and made plans with those grants?
Even if institutional measures would not have prevented the FTX case, they are likely to catch a whole host of other risks in the future.
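To illustrate the 'reduction, not zero' point with a toy calculation (the numbers below are made up for illustration, not estimates about the FTX case): several imperfect safeguards still compound.

```python
# Toy illustration of compounding risk reduction, with hypothetical probabilities.
# Each safeguard independently has some chance of *missing* a problem.
p_miss_whistleblowing = 0.7   # whistleblowing channel fails to surface it
p_miss_transparency   = 0.8   # transparency/reporting fails to flag it
p_miss_distributed    = 0.75  # distributed funding decisions fail to block it

# If the failures are roughly independent, the chance that every safeguard
# misses is the product of the individual miss probabilities.
residual_risk = p_miss_whistleblowing * p_miss_transparency * p_miss_distributed
print(f"chance nothing catches it: {residual_risk:.0%}")  # 42%, vs. 100% with no safeguards in place
```

None of the safeguards is decisive on its own, and none brings the risk to 0, but together they meaningfully shrink the chance that a problem goes entirely unnoticed.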
5.
- The big mistake I am making is not being an EA while commenting on EA. It makes me vulnerable to the attack of "your propositions are not concrete enough to fix our problems, so you must be doing it to get attention". I am not here trying to fix your problems.
- I actually do think that outsiders are permitted to ask you to fix problems, because your stated ambition is to do risk analysis for all of us: not just for effective altruism but, depending on what kind of EA you are, for a whole category of sentient beings, including categories as large as 'humanity' or 'future beings'. That means that even if I don't want to wear your brand, I can demand that you answer the questions of who gets to be in a position to influence funding, and why. And if it's not transparent, why is it not transparent? Is there a good reason why it is not transparent? If I am your moral patient, you should tell me why your current organizational structures are more solid, more epistemically trustworthy, than alternative ones.
6.
- I don't say anywhere that 'every procedure ought to be fully democratised' or 'every organisation has to have its own whistleblower protection scheme' - do I?
- *clearly* these are broad arguments, geared towards starting a discussion across EA and within EA institutions, which need to be translated into concrete proposals, adjustments and assessments that meet each contextual need
- there's no need to dismiss the question of what procedures actually lead to the best epistemic outcomes by arguing that 'democratising everything' would bring bureaucracy (of course it would, and no one is arguing for that anyway)
- for all the analyses of my tweets, please also look at the top page of the list of recommendations for reforms; it says something like "clearly this needs to be more detailed to be relevant, but I'll only put in my free time if I have reason to believe it will be worth my time". There was no interest from Will and his team in following up with any of it, so I left it at that (I had sent another email after the meeting with some more concrete steps necessary to at least get data, do some prototyping and research to test some of my claims about decentralised funding, in which I offered to provide advice and help out, but said that they should employ someone else to actually lead the project). Will said he was busy and would forward it to his team. I said 'please reach out if you have any more questions' and never heard from anyone again. It won't be hard to come up with concrete experiments/ideas for a specific context/organisation/task/team, but I'm not sure why it would be productive for me to do that publicly rather than at the request of a specific organisation/team. If you're an EA who cares about EA having those measures in place, please come up with those implementation details for your community yourself.
7.
- I'd be very happy to discuss details of actually implementing some of these proposals for particular contexts in which I believe it makes sense to try them. I'd be very happy to consult for organizations that are trying to make steps in those directions. I'd be very happy to engage with and see a theoretical discussion about the actual state of the research.
But none of the discussions that I've seen so far are on the level of detail that would match the forefront of the experimental data and scholarly work I've seen. Do you think scholars of democratic theory have not yet thought of a response to the typical 'but most people are stupid'? Everyone who dismisses decentralised reasoning as a viable and epistemically valuable approach should at least engage with the arguments of the political scientists who have spent years on these questions (i.e. not me; I've cited a bunch in previous publications and on Twitter, and here again, e.g. Landemore and Hong & Page are a good start), and then argue at their level to bring the debate forward if they still think they can.
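For readers who want a feel for that literature, here is a minimal, hypothetical Python sketch (a toy example of my own, not a model taken from Landemore or Hong & Page) of the Condorcet-jury-theorem intuition underlying many of these arguments: a majority vote over many independent, only modestly accurate judgments can beat a single, more accurate judge.

```python
import random

def majority_accuracy(n_voters: int, p_correct: float, trials: int = 20_000) -> float:
    """Estimate how often a simple majority of independent voters, each correct
    with probability p_correct, picks the right answer on a binary question."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

if __name__ == "__main__":
    random.seed(0)
    print("single expert, 65% accurate:  ", majority_accuracy(1, 0.65))    # ~0.65
    print("101 voters, each 55% accurate:", majority_accuracy(101, 0.55))  # ~0.85
```

The catch, and much of what the cited scholars actually argue about, is how far the independence and better-than-chance assumptions carry over to real institutions; the toy model only shows why 'most people are individually mediocre' does not by itself settle the question.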
8.
Jan, you seem particularly unhappy with me. Reach out if you like; I'm happy to have a chat or answer some more questions.
For what it's worth, I think that you are a well-liked and respected critic not just outside of EA, but also within it. You have three posts and 28 comments but a total karma of 1203! Compare this to Emile Torres or Erik Hoel or basically any other external critic with a forum account. I'm not saying this to deny that you have been treated unfairly by EAs; I remember one occasion when someone else was accused by a prominent EA of being your sock-puppet on basically no evidence. This is just to say that I hope you don't get too discouraged by this. Overall, I think there's good reason to believe that you are having some impact, slowly but persistently, and many of us would welcome you continuing to push, even if we have various specific disagreements with you (as I do). This comment reads to me as very exhausted, and I understand if you feel you don't have the energy to keep it up, but I also don't think it's a wasted effort.
I have read the Democratizing Risk paper that got EA criticism and think it was spot on. Not having ever been very popular anywhere (I get by on being “helpful” or “ignorable”), I use my time here to develop knowledge.
Your work and contributions could have good timing right now. You also have credentials and academic papers, all useful to establish your legitimacy for this audience. It might be useful to check to what extent TUA had to do with the FTX crisis, and whether a partitioning of EA ideologies combines or separates the two.
I believe that appetite for risk and attraction to betting is part and parcel of EA, as is a view informed more by wealth than by poverty. This speaks to appetite for financial risk and dissonance about charitable funding.
Critiques of EA bureaucracy could have more impact than critiques of EA ideology. Certainly your work with Luke Kemp on TUA seems like a hard sell for this audience, but I would welcome another round; there's a silent group of forum readers who could take notice of your effort.
Arguments against TUA visions of AGI just get an indifferent shrug here. Climate change is about as interesting to these folks as the threat of super-fungi. Not very interesting. Maybe a few hundred points on one post, if the author speaks "EA" or is popular. I do think the reasons are ideological rather than epistemic, though ideologies do act as an epistemic filter (as in soldier mindset).
It would be a bit rude to focus on a minor part of your comment after you posted such a comprehensive reply, so I first want to say that I agreed with some of the points.
With that out of the way, I even more want to say that the following perspective strikes me as immoral, in that it creates terrible, unfair incentives:
- I actually do think that outsiders are permitted to ask you to fix problems, because your stated ambition is to do risk analysis for all of us: not just for effective altruism but, depending on what kind of EA you are, for a whole category of sentient beings, including categories as large as 'humanity' or 'future beings'. That means that even if I don't want to wear your brand, I can demand that you answer the questions of who gets to be in a position to influence funding, and why. And if it's not transparent, why is it not transparent? Is there a good reason why it is not transparent? If I am your moral patient, you should tell me why your current organizational structures are more solid, more epistemically trustworthy, than alternative ones.
The problem I have with this framing is that it “punishes” EA (by applying isolated demands of “justify yourselves”) for its ambitious attempts to improve the world, while other groups of people (or other ideologies) (presumably?) don’t have to justify their inaction. And these demands come at a time when EA doesn’t even have that much power/influence. (If EA were about to work out the constitution of a world government about to be installed, then yeah, it would very much be warranted – both for EA outsiders and insiders – to see “let’s scrutinize EA and EAs” as a main priority!)
The longtermist EA worldview explicitly says that the world is broken and on a bad trajectory, so that we’re pretty doomed if nothing changes soon. (Or, at least it says that we’re running an unjustified level of unnecessary risks if we don’t change anything soon – not all EAs are of the view that existential risks are >10%.)
If this worldview is correct, then what you're demanding is a bit like going up to Frodo and his companions and stalling them to ask a long list of questions about their decision procedures, how much they've really thought through what they're going to do once they have achieved more of their aims, and whether they'd not rather consult the populations of the Shire or other lands in Middle-earth to decide what they should do. All of that at a time when the fellowship is still mostly just in Rivendell planning its next moves (and while Sauron is getting ready to kill lots of humans, elves, and dwarves).
Of course, the communists also claimed that the world is broken when they tried to convince more people to join them and seek influence to make the world better. So, I concede that it’s not a priori unreasonable to consider it urgent and morally pressing, when you come across a group of self-proclaimed world-improvers on an upwards trajectory (in terms of real-world influence), to scrutinize if they have their head together and have the integrity needed to in expectation change things for the better rather than for the worse.
The world is complicated; it matters to get things right. Sometimes self-proclaimed world improvers are the biggest danger, but sometimes the biggest danger is the reason why self-proclaimed world improvers are hurrying around doing stuff and appear kind of desperate. You can be wrong in both directions:
- slow down* Frodo at a point where it's (a) anyway unlikely that he'll succeed and (b) a dumb use of time to focus on things that only ever become a priority if Middle-earth survives Sauron, given the imminent threat of Sauron
- fail to apply scrutiny to the early Marxists despite (a) there already being signs of them becoming uncannily memetically successful, with a lot of resentfulness in the societal undercurrent (which is hard to control), and (b) the "big threat" being 'just' Capitalism and not Sauron. (One thing to say about Capitalism is "it works," and it seems potentially very risky to mess with systems that work.)
*Not all types of criticism / suggestions for improvement are an instance of "slowing down." What I'm criticizing here is the attitude of "you owe us answers" rather than "here's some criticism, would be curious for replies, especially if more people agree with my criticism (in which case the voices calling for replies will automatically grow/become louder)."
Journalists will often jump towards the perspective** that's negative and dismissive of EA concerns, because that fits into existing narratives and because journalists haven't thought about the EA worldview in detail (their primary reaction to things like AI risk is often driven by absurdity heuristics rather than careful engagement). You, by contrast, have thought through these things. So, I'd say it's on you to make an attempt to at least present EA in a fair light – though I of course understand that, as a critic of EA, it's reasonable that this isn't your main priority. (And maybe you've tried this – I understand it's hard to get points across with some journalists.)
**One unfortunate thing about some of the reporting on EA is also that journalists sometimes equate EA with "Silicon Valley tech culture," even though the latter is arguably something EA is to some degree in tension with (AI capabilities research and "tech progress moving too fast for wisdom/foresight to catch up"). That makes EA seem powerful so you can punch upwards at it, when in fact EA is still comparatively small. (And smaller now, after recent events.)
I can understand why you mightn't trust us, but I would encourage EAs to consider that we need to back ourselves, even though I've certainly been shaken by the whole FTX fiasco. Unfortunately, there's an adverse selection effect: the least trustworthy actors are unlikely to recuse themselves from influence, so if the more trustworthy actors recuse themselves, we will end up with the least responsible actors in control.
So despite the flaws I see with EA, I don't really see any choice apart from striving as hard as we can to play our part in building a stronger future. After all, the perfect is the enemy of the good. And if the situation changes such that there are others better equipped than us to handle these issues who would not benefit from our assistance, we should of course recuse ourselves, but sadly I believe this is unlikely to happen.
I think the global argument is that power in EA should be deconcentrated/diffused across the board, and subjected to more oversight across the board, to reduce risk from its potential misuse. I don't think Zoe is suggesting that any actor should get a choice about how much power to lose or how much oversight to have. Could you say more about how adverse selection interacts with that approach?
Even if every actor in EA agreed to limit its power, we wouldn’t be able to limit the power of actors outside of EA. This is the adverse selection effect.
This means that we need to carefully consider the cost-benefit trade-off in proposals to limit the power of groups. In some cases, e.g. seeing how the FTX fiasco was a larger systemic risk, it's clear that there's a need for more oversight. In other cases, it's more like the analogy of putting Frodo's quest on hold until we've conducted an opinion survey of Middle-earth.
(Update: Upon reflection, this comment makes me sound like I'm more towards 'just do stuff' than I am. I think we need to recognise that we can't assume someone is perfectly virtuous just because they're an EA, but I also want us to retain the characteristics of a high-trust community (having to check up on every little decision is a characteristic of a low-trust community).)
Thanks. That argument makes sense on the assumption that a given reform would reduce EA’s collective power as opposed to merely redistributing it within EA.
Indeed Lukas, I guess what I'm saying is: given what I know about EA, I would not entrust it with the ring.
I don’t understand what this means, exactly.
If you’re talking about the literal one ring from LOTR, then yeah EA not being trustworthy is vacuously true, since no human without mental immunity feats can avoid being corrupted.
With that out of the way, I even more want to say that the following perspective strikes me as immoral, in that it creates terrible, unfair incentives:
“I actually do think that outsiders are permitted to ask you to fix problems, because your stated ambition is to do risk analysis for all of us: not just for effective altruism but, depending on what kind of EA you are, for a whole category of sentient beings, including categories as large as 'humanity' or 'future beings'. That means that even if I don't want to wear your brand, I can demand that you answer the questions of who gets to be in a position to influence funding, and why. And if it's not transparent, why is it not transparent? Is there a good reason why it is not transparent? If I am your moral patient, you should tell me why your current organizational structures are more solid, more epistemically trustworthy, than alternative ones.”
The problem I have with this framing is that it “punishes” EA (by applying isolated demands of “justify yourselves”) for its ambitious attempts to improve the world, while other groups of people (or other ideologies) (presumably?) don’t have to justify their inaction. And these demands come at a time when EA doesn’t even have that much power/influence. (If EA were about to work out the constitution of a world government about to be installed, then yeah, it would very much be warranted – both for EA outsiders and insiders – to see “let’s scrutinize EA and EAs” as a main priority!)
Immoral? This is a surprising descriptor to see used here. The standard of "justify yourselves" to a community soup kitchen, or some other group/ideology, is very different from the standard of "justify yourselves" to a movement apparently dedicated to doing the most good it can for those who need it most / all humans / all sentient beings / all sentience that may exist in the far future. The decision-relevant point shouldn't be "well, does [some other group] justify themselves and have transparency and have good institutions and have epistemically trustworthy systems? If not, asking EA to reach it is an isolated demand for rigour, and creates terrible incentives." Like, what follows? Are you suggesting we should then ignore this because other groups don't do this? Or because critics of EA don't symmetrically apply these criticisms to all groups around the world?
The questions (imo) should be something like—are these actions beneficial in helping EA be more impactful?[1] Are there other ways of achieving the same goals better than what’s proposed? Are any of these options worth the costs? I don’t see why other groups’ inaction justifies EA’s, if it’s the case that these actions are in fact beneficial.
And these demands come at a time when EA doesn’t even have that much power/influence. (If EA were about to work out the constitution of a world government about to be installed, then yeah, it would very much be warranted – both for EA outsiders and insiders – to see “let’s scrutinize EA and EAs” as a main priority!)
If EA wants to be in a position to work out the constitution of a world government about to be installed, it needs to first show outsiders that it’s more than a place of interesting intellectual ideas, but a place that can be trusted to come up with interventions and solutions that will actually work in practice. If the standard for “scrutinising EA” is when EA is about to work out the constitution of a world government about to be installed, it is probably already too late.
What I'm criticizing here is the attitude of "you owe us answers" rather than "here's some criticism, would be curious for replies, especially if more people agree with my criticism (in which case the voices calling for replies will automatically grow/become louder)."
I don’t want to engage in a discussion about the pros and cons of the Democratising Risk paper, but from an outsider’s perspective it seems pretty clear to me that Carla did engage in a good faith “EA-insider” way, even if you don’t think she’s expressing criticism in a way you like now. But again—if you think EA is actually analogous to Frodo and responsible for saving the world, of course it would be reasonable for outsiders to take strong interest in what your plan is, and where it might go wrong, or be concerned about any unilateral actions you might take—they are directly impacted by what you choose to do with the ring, they might be in a position to greatly help or hinder you. For example, they might want someone more capable to deliver the ring, and not just the person who happened to inherit it from his cousin.
More generally, EA should remain open to criticism that isn't delivered in line with your communication norms, and risks leaving value on the table if it ignores criticism solely because it isn't expressed in an attitude that you prefer.
e.g. via more trust within the community in those who are steering it, more trust from external donors, more trust from stakeholders who are affected by EA's goals, or some other way?
Immoral? This is a really surprising descriptor to see used here.
Yeah. I have strong feelings that social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk. For example, I'm annoyed when people punish others for honesty in cases where it would have been easy to tell a lie and look better. Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply "Oh, you care for the future of all humans, and even animals? That's suspicious – we're definitely going to apply extra scrutiny towards you." Meanwhile, AI capabilities companies continue to scale up compute and most of the world is busy discussing soccer or what not. Yet somehow, "Are EAs following democratic processes and why does their funding come from very few sources?" is made into the bigger issue than widespread apathy or the extent to which civilization might be acutely at risk.
The question shouldn’t be “well, does [some other group] justify themselves and have transparency and have good institutions and have epistemically trustworthy systems? If not, asking EA to reach it is an isolated demand for rigour, and creates terrible incentives.” Like—what follows?
EAs who are serious about their stated goals have the most incentive of anyone to help the EA movement get its act together. The idea that "it's important to have good institutions" is something EA owes to outsiders is what seems weird to me. Doesn't this framing kind of suggest that EAs couldn't motivate themselves to try their best if it weren't for "institutional safeguards"? What a depressing view of humans, that they can only act according to their stated ideals if they're watched at every step and have to justify themselves to critics!
EAs have discussions about governance issues EA-internally, too. It’s possible (in theory) that EA has as many blindspots as Zoe thinks, but it’s also possible that Zoe is wrong (or maybe it’s something in between). Either way, I don’t think anyone in EA, nor “EA” as a movement, has any obligation to engage in great detail with Zoe’s criticisms if they don’t think that’s useful.* (Not to say that they don’t consider the criticism useful – my impression is that there are EAs on both sides, and that’s fine!)
If a lot of people agree with Zoe's criticism, that creates more social pressure to answer her points. That's probably a decent mechanism for determining what an "appropriate" level of minimally-mandatory engagement should be – though it depends a bit on whether the social pressure comes from well-intentioned people who are reasonably informed about the issues, or whether some kind of "let's all pile on these stupid EAs" dynamic emerges. (So far, the dynamics seem healthy to me, but if EA keeps getting trashed in the media, this could change.)
*(I guess if someone’s impression of EA was “group of people who want to turn all available resources into happiness simulations regardless of what existing people want for their future,” then it would be reasonable for them to go like, “wtf, if that’s your movement’s plan, I’m concerned!” However, that would be a strawman impression of EA. Most EAs endorse moral views according to which individual preferences matter and “eudaimonia” is basically “everyone gets what they most want.” Besides, even the few hedonist utilitarians [or negative utilitarians] within EA think preferences matter and argue for being nice to others with different views.)
The questions should just be—are these actions beneficial in helping EA be more impactful? [1] Are there other ways of achieving the same goals better than what’s proposed? Are any of these options worth the costs? I don’t see why other groups’ inaction justifies EA’s, if it’s the case that these actions are in fact beneficial.
I don’t disagree with this part. I definitely think it’s wise for EAs to engage with critics, especially thoughtful critics, which I consider Zoe to be one of the best examples of, despite disagreeing with probably at least 50% of her specific suggestions.
I don’t want to engage in a discussion about the pros and cons of the Democratising Risk paper, but from an outsider’s perspective it seems pretty clear to me that Carla did engage in a good faith “EA-insider” way, even if you don’t think she’s expressing criticism in a way you like now.
While I did use the word “immoral,” I was only commenting on the framing Zoe/Carla used in that one particular paragraph I quoted. I definitely wasn’t describing her overall behavior!
In case you want my opinion: I am a bit concerned that her rhetoric is often "sensationalist" in a nuance-lacking way, and this makes EA look bad to journalists in a way I consider uncalled for. But I wouldn't label that "acting in bad faith"; far from it!
But again—if you think EA is actually analogous to Frodo and responsible for saving the world, of course it would be reasonable for outsiders to take interest in what your plan is, and where it might go wrong—they are directly impacted by what you choose to do with the ring, they might be in a position to greatly help or hinder you. For example, they might want someone more capable to deliver the ring, and not just the person who happened to inherit it from his cousin.
Yeah, I agree with all of that. Still, in the end, it’s up to EAs themselves to decide which criticisms to engage with at length and where it maybe isn’t so productive.
For example, they might want someone more capable to deliver the ring, and not just the person who happened to inherit it from his cousin.
In the books (or the movies), this part is made easy by having a kind and wise old wizard – who wouldn’t consider going with Gandalf’s advice a defensible decision-procedure!
In reality, "who gets to wield power" is more complicated. But one important point in my original comment was that EA doesn't even have that much power, and no ring (nor anything analogous to it – that's a place where the analogy breaks). So, it's a bit weird to subject EA to as much scrutiny as would be warranted if it were about to enshrine its views into the constitution of a world government. All longtermist EA is really trying to do right now is to ensure that people won't be dead soon, so that there'll be the option to talk governance and so on later. (BTW, I do expect EAs to write up proposals for visions of AI-aided ideal governance at some point. I think that's good to have and good to discuss. I don't see it as the main priority right now because EAs haven't yet made any massive bids for power in the world. Besides, it's not like whatever the default would otherwise be has much justification. And you could even argue that EAs have done the most so far of any group to promote discourse about important issues related to fair governance of the future.)
Thanks for sharing! We have some differing views on this which I will focus on—but I agree with much of what you say and do appreciate your thoughts + engagement here.
Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply "Oh, you care for the future of all humans, and even animals? That's suspicious – we're definitely going to apply extra scrutiny towards you." Meanwhile, AI capabilities companies continue to scale up compute and most of the world is busy discussing soccer or what not. Yet somehow, "Are EAs following democratic processes and why does their funding come from very few sources?" is made into the bigger issue than widespread apathy or the extent to which civilization might be acutely at risk.
It sounds like you are getting the impression that people criticising EA must think this is a larger issue than AI capabilities or widespread apathy etc., since they aren't spending their time lobbying against those larger issues. But there might be other explanations for their focus: any given individual's sphere of influence, tractability, personal identity, and other factors can all contribute here.
EAs who are serious about their stated goals have the most incentive of anyone to help the EA movement get its act together. The idea that "it's important to have good institutions" is something EA owes to outsiders is what seems weird to me. Doesn't this framing kind of suggest that EAs couldn't motivate themselves to try their best if it weren't for "institutional safeguards"?
"It's important to have good institutions" is clearly something that "serious EAs" are strongly incentivised to care about. But people who have a lot of power and influence and funding also face incentives to maintain a status quo that they benefit from. EA is no different, and people seeking to do good are not exempt from these kinds of incentives. EAs who are serious about things should acknowledge that they are subject to these incentives, as well as the possibility that one reason outsiders might be speaking up is that they think EAs aren't taking the problem seriously enough. The benefit of the outside critic is NOT that EAs have some special obligation towards them (though, in this case, if your actions directly impact them, then they are a relevant stakeholder worth considering), but that they are somewhat removed and may be able to provide some insight into an issue that is harder to see when you are deeply surrounded by other EAs and people who are directly mission/value-aligned.
What a depressing view of humans, that they can only act according to their stated ideals if they’re watched at every step and have to justify themselves to critics!
I think this goes too far; I don't think this is the claim being made. The standard is just "would better systems and institutional safeguards better align EA's stated ideals with what happens in practice? If so, what would this look like, and how would EA organisations implement these?". My guess is you probably agree with this, though?
Either way, I don’t think anyone in EA, nor “EA” as a movement, has any obligation to engage in great detail
I guess if someone’s impression of EA was “group of people who want to turn all available resources into happiness simulations regardless of what existing people want for their future”
Nitpick: while I agree that it would be a strawman, it isn't the only scenario in which outsiders might be concerned. There are also people who disagree with some longtermists' visions of the future, there are people who think EA's general approach is bad, and it could follow that those people will think $$ spent on EA causes is poorly spent and should be spent in [some different way]. There are also people who think EA is a talent drain away from important issues. Of course, this doesn't interact with the extent to which EA is "obligated" to respond, especially because many of these takes aren't great. I agree that there's no obligation, per se. But the claim is "outsiders are permitted to ASK you to fix your problems", not that you are obligated to respond (though subsequent sentences RE: "I can demand" or "you should" might be a source of miscommunication).
I guess the way I see it is something like this: EA isn't obligated to respond to any outsider criticism, but if you want to be taken seriously by these outsiders who have these concerns, if you want buy-in from people you claim to be working with and working for, if you don't want people at social entrepreneurship symposiums seriously considering questions like "Is the way to do the most good to destroy effective altruism?", then it could be in your best interest to take good-faith criticisms and concerns seriously, even if the attitude comes across poorly, because it likely reflects some barrier to you achieving your goals. But I think there probably isn't much disagreement between us here.
Yeah. I have strong feelings that social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk. For example, I'm annoyed when people punish others for honesty in cases where it would have been easy to tell a lie and look better. Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply "Oh, you care for the future of all humans, and even animals? That's suspicious – we're definitely going to apply extra scrutiny towards you." Meanwhile, AI capabilities companies continue to scale up compute and most of the world is busy discussing soccer or what not. Yet somehow, "Are EAs following democratic processes and why does their funding come from very few sources?" is made into the bigger issue than widespread apathy or the extent to which civilization might be acutely at risk.
I think this is an undervalued idea. But I also think that there’s a distinct but closely related idea, which is valuable, which is that for any Group X with Goal Y, it is nearly always instrumentally valuable for Group X to hear about suggestions about how it can better advance Goal Y, especially from those who believe that Goal Y is valuable. Sometimes this will read as (or have the effect of) disincentivizing adopting Goal Y (because it leads to criticism), but in fact it’s often much easier to marginally improve the odds of Goal Y being achieved by attempting to persuade Group X to do better at Y than to persuade Group ~X who believes ~Y. I take Carla Zoe to be doing this good sort of criticism, or at least that’s the most valuable way to read her work.
I would also point out that I think the proposition that "social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk" is both:
Probably undesirable to implement in practice because any criticism will have some disincentivizing effect.
Probably violated by your comment itself, since I'd guess that any normal person would be disincentivized to some extent from engaging in constructive criticism (above the baseline of apathy or jerkiness) if it is likely to be labeled as immoral.
This is just to say that I value the general maxim you’re trying to advance here, but “never” is way too strong. Then it’s just a boring balancing question.
“Never” is too strong, okay. But I disagree with your second point. I feel like I was only speaking out against the framing that critics of EA are entitled to a lengthy reply because of EA being ambitious in its scope of caring. (This framing was explicit at least in the quoted paragraph, not necessarily in her post as a whole or her previous work.) I don’t feel like I was discouraging criticism. Basically, my point wasn’t about the act of criticizing at all, it was only about an added expectation that went with it, which I’d paraphrase as “EAs are doing something wrong unless they answer to my concerns point by point.”
I feel like I was only speaking out against the framing that critics of EA are entitled to a lengthy reply because of EA being ambitious in its scope of caring. (This framing was explicit at least in the quoted paragraph, not necessarily in her post as a whole or her previous work.)
Ah, okay. That seems more reasonable. Sorry for misunderstanding.
I agree insofar as status as an intended EA beneficiary does not presumptively provide someone with standing to demand answers from EA about risk management. However, non-EA persons are also potentially subject to the risk of harms generated by EA, and that status gives them at least some degree of standing.
I think the LOTR analogy is inapt. Taking Zoe's comment here at face value, she is not suggesting that everyone put Project Mount Doom on hold until the Council of Elrond runs some public-opinion surveys. She is suggesting that reform ideas warrant further development and discussion. That's closer to asking for some time from a mid-level bureaucrat at Rivendell and a package of lembas than to diverting Frodo. Yes, it may be necessary to bring Frodo in at some point, but only if preliminary work suggests it would be worthwhile to do so.
I recognize that there could be some scenarios in which the utmost single-mindedness is essential: the Nazgûl have been sighted near the Ringbearer. But other EA decisions don't suggest that funders and leaders are at Alert Condition Nazgûl. For example, while I don't have a clear opinion on the Wytham purchase, it seems to have required a short-term expenditure of time and lock-up of funds for an expected medium-to-long-run payoff.
However, non-EA persons are also potentially subject to the risk of harms generated by EA, and that status gives them at least some degree of standing.
Yeah, I agree that if we have reason to assume that there might be significant expected harms caused by EA, then EAs owe us answers. But I think it’s a leap of logic to go from “because your stated ambition is to do risk analysis for all of us” to “That means that even if I don’t want to wear your brand, I can demand that you answer the questions of [...]” – even if we add the hidden premise “this is about expected harms caused by EA.” Just because EA does “risk analysis for all sentient beings” doesn’t mean that EA puts sentient beings at risk. Having suboptimal institutions is bad, but I think it’s far-fetched to say that it would put non-EAs at risk. At least, it would take more to spell out the argument (and might depend on specifics – perhaps the point goes through in very specific instances, but not so much if e.g., an EA org buys a fancy house).
There are some potentially dangerous memes in the EA memesphere around optimizing for the greater good (discussed here, recently), which is the main concern I actually see and share. But if that were the only concern, it should be highlighted as such (and it would be confusing why many arguments then seem to be about seemingly unrelated things). (I think risks from act consequentialism were one point out of many in the Democratising Risk paper – I remember I criticized the paper for not mentioning any of the ways EAs themselves have engaged with this concern.)
By contrast, if the criticism of EA is more about “you fail at your aims” rather than “you pose a risk to all of us,” then my initial point still applies, that EA doesn’t have to justify itself more so than any other similarly-sized, similarly powerful movement/group/ideology. Of course, it seems very much worth listening if a reasonable-seeming and informed person tells you “you fail at your aims.”
I would have agreed pre-FTX. In my view, EA actors meaningfully contributed—in a causal sense—to the rise of SBF, which generated significant widespread harm. Given the size and lifespan of EA, that is enough for a presumption of sufficient risk of future external harm to confer standing. There were just too many linkages and influences, several of them but-for causes.
EA has a considerable appetite for risk and little of what some commenters are dismissing as "bureaucracy," which increases the odds of other harms being felt externally. So the presumption is not rebutted in my book.
Thank you for taking the time to write this up, it is encouraging—I also had never thought to check my karma …
I have read the Democratizing Risk paper that got EA criticism and think it was spot on. Not having ever been very popular anywhere (I get by on being “helpful” or “ignorable”), I use my time here to develop knowledge.
Your work and contributions could have good timing right now. You also have credentials and academic papers, all useful to establish your legitimacy for this audience. It might be useful to check to what extent TUA had to do with the FTX crisis, and whether a partitioning of EA ideologies combines or separates the two.
I believe that appetite for risk and attraction to betting is part and parcel of EA, as is a view informed more by wealth than by poverty. This speaks to appetite for financial risk and dissonance about charitable funding.
Critiques of EA bureaucracy could have more impact than critiques of EA ideology. Certainly your work with Luke Kemp on TUA seems like a hard sell for this audience, but I would welcome another round, there’s a silent group of forum readers who could take notice of your effort.
Arguments against TUA visions of AGI just get an ignoring shrug here. Climate change is about as interesting to these folks as the threat of super-fungi. Not very interesting. Maybe a few 100 points on one post, if the author speaks “EA” or is popular. I do think the reasons are ideological rather than epistemic, though ideologies do act as an epistemic filter (as in soldier mindset).
It would be a bit rude to focus on a minor part of your comment after you posted such a comprehensive reply, so I first want to say that I agreed with some of the points.
With that out of the way, I even more want to say that the following perspective strikes me as immoral, in that it creates terrible, unfair incentives:
The problem I have with this framing is that it “punishes” EA (by applying isolated demands of “justify yourselves”) for its ambitious attempts to improve the world, while other groups of people (or other ideologies) (presumably?) don’t have to justify their inaction. And these demands come at a time when EA doesn’t even have that much power/influence. (If EA were about to work out the constitution of a world government about to be installed, then yeah, it would very much be warranted – both for EA outsiders and insiders – to see “let’s scrutinize EA and EAs” as a main priority!)
The longtermist EA worldview explicitly says that the world is broken and on a bad trajectory, so that we’re pretty doomed if nothing changes soon. (Or, at least it says that we’re running an unjustified level of unnecessary risks if we don’t change anything soon – not all EAs are of the view that existential risks are >10%.)
If this worldview is correct, then what you’re demanding is a bit like going up to Frodo and his companions and stalling them to ask a long list of questions about their decision procedures and how much they’ve really thought through about what they’re going to do once they have achieved more of their aims, and if they’d not rather consult with the populations of the Shire or other lands in Middle earth to decide what they should do. All of that at a time when the companionship is still mostly just in Rivendell planning for future moves (and while Sauron is getting ready to kill lots of humans, elves, and dwarves).
Of course, the communists also claimed that the world was broken when they tried to convince more people to join them and seek influence to make the world better. So, I concede that it’s not a priori unreasonable, when you come across a group of self-proclaimed world-improvers on an upwards trajectory (in terms of real-world influence), to consider it urgent and morally pressing to scrutinize whether they have their heads together and have the integrity needed to change things, in expectation, for the better rather than for the worse.
The world is complicated; it matters to get things right. Sometimes self-proclaimed world improvers are the biggest danger, but sometimes the biggest danger is the reason why self-proclaimed world improvers are hurrying around doing stuff and appear kind of desperate. You can be wrong in both directions:
slow down* Frodo at a point where it’s (a) unlikely anyway that he’ll succeed and (b) a dumb use of time, given the imminent threat of Sauron, to focus on things that only ever become a priority if Middle earth survives Sauron
fail to apply scrutiny to the early Marxists even though (a) there were already signs of them becoming uncannily memetically successful, with a lot of resentfulness in the societal undercurrent (which is hard to control), and (b) the “big threat” was ‘just’ Capitalism and not Sauron. (One thing to say about Capitalism is “it works,” and it seems potentially very risky to mess with systems that work.)
*Not all types of criticism / suggestions for improvement are an instance of “slowing down.” What I’m criticizing here is the attitude of “you owe us answers” rather than “here’s some criticism, I’d be curious for replies, especially if more people agree with my criticism (in which case the voices calling for replies will automatically grow louder).”
Journalists will often jump towards the perspective** that’s negative and dismissive of EA concerns because that fits into existing narratives and because journalists haven’t thought about the EA worldview in detail (and their primary reaction to things like AI risk is often driven by absurdity heuristics rather than careful engagement). You, by contrast, have thought through these things. So, I’d say it’s on you to make an attempt to at least present EA in a fair light – though I of course understand that, as a critic of EA, it’s reasonable that this isn’t your main priority. (And maybe you’ve tried this – I understand it’s hard to get points across with some journalists.)
**One unfortunate thing about some of the reporting on EA is also that journalists sometimes equate EA with “Silicon Valley tech culture,” even though the latter is arguably something EA is to some degree in tension with (AI capabilities research, and “tech progress moving too fast for wisdom/foresight to catch up”). That makes EA seem powerful, so you can punch upwards at it, when in fact EA is still comparatively small. (And smaller now after recent events.)
Indeed Lukas, I guess what I’m saying is: given what I know about EA, I would not entrust it with the ring.
I can understand why you mightn’t trust us, but I would encourage EAs to consider that we need to back ourselves, even though I’ve certainly been shaken by the whole FTX fiasco. Unfortunately, there’s an adverse selection effect: the least trustworthy actors are unlikely to recuse themselves from influence, so if the more trustworthy actors recuse themselves, we will end up with the least responsible actors in control.
So despite the flaws I see with EA, I don’t really see any choice apart from striving as hard as we can to play our part in building a stronger future. After all, the perfect is the enemy of the good. And if the situation changes such that there are others better equipped than us to handle these issues and who would not benefit from our assistance, we should of course recuse ourselves, but sadly I believe this is unlikely to happen.
I think the global argument is that power in EA should be deconcentrated/diffused across the board, and subjected to more oversight across the board, to reduce risk from its potential misuse. I don’t think Zoe is suggesting that any actor should get a choice on how much power to lose or oversight to have. Could you say more about how adverse selection interacts with that approach?
Even if every actor in EA agreed to limit its power, we wouldn’t be able to limit the power of actors outside of EA. This is the adverse selection effect.
This means that we need to carefully consider the cost-benefit trade-off in proposals to limit the power of groups. In some cases, e.g. seeing how the FTX fiasco was a larger systemic risk, it’s clear that there’s a need for more oversight. In other cases, it’s more like the analogy of putting Frodo’s quest on hold until we’ve conducted an opinion survey of Middle Earth.
(Update: Upon reflection, this comment makes me sound like I’m more towards ‘just do stuff’ than I am. I think we need to recognise that we can’t assume someone is perfectly virtuous just because they’re an EA, but I also want us to retain the characteristics of a high-trust community (having to check up on every little decision is a characteristic of a low-trust community).)
Thanks. That argument makes sense on the assumption that a given reform would reduce EA’s collective power as opposed to merely redistributing it within EA.
I don’t understand what this means, exactly.
If you’re talking about the literal one ring from LOTR, then yeah EA not being trustworthy is vacuously true, since no human without mental immunity feats can avoid being corrupted.
Immoral? This is a surprising descriptor to see used here.
The standard of “justify yourselves” for a community soup kitchen, or some other group / ideology, is very different from the standard of “justify yourselves” for a movement apparently dedicated to doing the most good it can for those who need it most / all humans / all sentient beings / all sentience that may exist in the far future. The decision-relevant point shouldn’t be “well, does [some other group] justify themselves and have transparency and have good institutions and have epistemically trustworthy systems? If not, asking EA to reach that bar is an isolated demand for rigour, and creates terrible incentives.” Like—what follows? Are you suggesting we should then ignore this because other groups don’t do this? Or because critics of EA don’t symmetrically apply these criticisms to all groups around the world?
The questions (imo) should be something like—are these actions beneficial in helping EA be more impactful?[1] Are there other ways of achieving the same goals better than what’s proposed? Are any of these options worth the costs? I don’t see why other groups’ inaction justifies EA’s, if it’s the case that these actions are in fact beneficial.
If EA wants to be in a position to work out the constitution of a world government about to be installed, it needs to first show outsiders that it’s more than a place of interesting intellectual ideas, but a place that can be trusted to come up with interventions and solutions that will actually work in practice. If the standard for “scrutinising EA” is when EA is about to work out the constitution of a world government about to be installed, it is probably already too late.
I don’t want to engage in a discussion about the pros and cons of the Democratising Risk paper, but from an outsider’s perspective it seems pretty clear to me that Carla did engage in a good faith “EA-insider” way, even if you don’t think she’s expressing criticism in a way you like now. But again—if you think EA is actually analogous to Frodo and responsible for saving the world, of course it would be reasonable for outsiders to take strong interest in what your plan is, and where it might go wrong, or be concerned about any unilateral actions you might take—they are directly impacted by what you choose to do with the ring, they might be in a position to greatly help or hinder you. For example, they might want someone more capable to deliver the ring, and not just the person who happened to inherit it from his cousin.
More generally, EA should remain open to criticism that isn’t delivered according to your communication norms, and it risks leaving value on the table if it ignores criticism solely because it isn’t expressed in an attitude that you prefer.
e.g. via more trust within the community in those who are steering it, more trust from external donors, more trust from stakeholders who are affected by EA’s goals, or some other way?
Yeah. I have strong feelings that social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk. For example, I’m annoyed when people punish others for honesty in cases where it would have been easy to tell a lie and look better. Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply “Oh, you care for the future of all humans, and even animals? That’s suspicious – we’re definitely going to apply extra scrutiny towards you.” Meanwhile, AI capabilities companies continue to scale up compute and most of the world is busy discussing soccer or what not. Yet somehow, “Are EAs following democratic processes and why does their funding come from very few sources?” is made into the bigger issue than widespread apathy or the extent to which civilization might be acutely at risk.
EAs who are serious about their stated goals have the most incentive of anyone to help the EA movement get its act together. The idea that “it’s important to have good institutions” is something EA owes to outsiders is what seems weird to me. Doesn’t this framing kind of suggest that EAs couldn’t motivate themselves to try their best if it weren’t for “institutional safeguards”? What a depressing view of humans, that they can only act according to their stated ideals if they’re watched at every step and have to justify themselves to critics!
EAs have discussions about governance issues EA-internally, too. It’s possible (in theory) that EA has as many blindspots as Zoe thinks, but it’s also possible that Zoe is wrong (or maybe it’s something in between). Either way, I don’t think anyone in EA, nor “EA” as a movement, has any obligation to engage in great detail with Zoe’s criticisms if they don’t think that’s useful.* (Not to say that they don’t consider the criticism useful – my impression is that there are EAs on both sides, and that’s fine!)
If a lot of people agree with Zoe’s criticism, that creates more social pressure to answer her points. That’s probably a decent mechanism for determining what an “appropriate” level of minimally-mandatory engagement should be – though it depends a bit on whether the social pressure comes from well-intentioned people who are reasonably informed about the issues or whether some kind of “let’s all pile on these stupid EAs” dynamic emerges. (So far, the dynamics seem healthy to me, but if EA keeps getting trashed in the media, then this could change.)
*(I guess if someone’s impression of EA was “group of people who want to turn all available resources into happiness simulations regardless of what existing people want for their future,” then it would be reasonable for them to go like, “wtf, if that’s your movement’s plan, I’m concerned!” However, that would be a strawman impression of EA. Most EAs endorse moral views according to which individual preferences matter and “eudaimonia” is basically “everyone gets what they most want.” Besides, even the few hedonist utilitarians [or negative utilitarians] within EA think preferences matter and argue for being nice to others with different views.)
I don’t disagree with this part. I definitely think it’s wise for EAs to engage with critics, especially thoughtful critics, which I consider Zoe to be one of the best examples of, despite disagreeing with probably at least 50% of her specific suggestions.
While I did use the word “immoral,” I was only commenting on the framing Zoe/Carla used in that one particular paragraph I quoted. I definitely wasn’t describing her overall behavior!
In case you want my opinion, I am a bit concerned that her rhetoric is often a bit “sensationalist” in a nuance-lacking way, and this makes EA look bad to journalists in a way I consider uncalled for. But I wouldn’t label that “acting in bad faith”; far from it!
Yeah, I agree with all of that. Still, in the end, it’s up to EAs themselves to decide which criticisms to engage with at length and where it maybe isn’t so productive.
In the books (or the movies), this part is made easy by having a kind and wise old wizard – who wouldn’t consider going with Gandalf’s advice a defensible decision-procedure!
In reality, “who gets to wield power” is more complicated. But one important point in my original comment was that EA doesn’t even have that much power, and no ring (nor anything analogous to it – that’s a place where the analogy breaks). So, it’s a bit weird to subject EA to as much scrutiny as would be warranted if they were about to enshrine their views into the constitution of a world government. All longtermist EA is really trying to do right now is trying to ensure that people won’t be dead soon so that there’ll be the option to talk governance and so on later on. (BTW, I do expect EAs to write up proposals for visions of AI-aided ideal governance at some point. I think that’s good to have and good to discuss. I don’t see it as the main priority right now because EAs haven’t yet made any massive bids for power in the world. Besides, it’s not like whatever the default would otherwise be has much justification. And you could even argue that EAs have done the most so far out of any group promoting discourse about important issues related to fair governance of the future.)
Thanks for sharing! We have some differing views on this which I will focus on—but I agree with much of what you say and do appreciate your thoughts + engagement here.
It sounds like you are getting the impression that, because critics of EA aren’t spending their time lobbying against AI capabilities or widespread apathy etc., they must think EA’s flaws are the larger issue. But there might be other explanations for their focus—any given individual’s sphere of influence, tractability, personal identity, and other factors can all contribute here.
“It’s important to have good institutions” is clearly something that “serious EAs” are strongly incentivised to act on. But people who have a lot of power and influence and funding also face incentives to maintain a status quo that they benefit from. EA is no different, and people seeking to do good are not exempt from these kinds of incentives. EAs who are serious about things should acknowledge that they are subject to these incentives, as well as the possibility that one reason outsiders might be speaking up about this is that they think EAs aren’t taking the problem seriously enough. The benefit of the outside critic is NOT that EAs have some special obligation towards them (though, in this case, if your actions directly impact them, then they are a relevant stakeholder worth considering), but that they are somewhat removed and may be able to provide some insight into an issue that is harder to see when you are deeply surrounded by other EAs and people who are directly mission- / value-aligned.
I think this goes too far; I don’t think this is the claim being made. The standard is just “would better systems and institutional safeguards better align EA’s stated ideals with what happens in practice? If so, what would this look like, and how would EA organisations implement these?”. My guess is you probably agree with this, though?
Nitpick: while I agree that it would be a strawman, it isn’t the only scenario in which outsiders might be concerned. There are also people who disagree with some longtermists’ vision of the future, there are people who think EA’s general approach is bad, and it could follow that those people will think $$ on EA causes are poorly spent and should be spent in [some different way]. There are also people who think EA is a talent drain away from important issues. Of course, this doesn’t bear on the extent to which EA is “obligated” to respond, especially because many of these takes aren’t great. I agree that there’s no obligation, per se. But the claim is “outsiders are permitted to ASK you to fix your problems”, not that you are obligated to respond (though subsequent sentences RE: “I can demand” or “you should” might be a source of miscommunication).
I guess the way I see it is something like this: EA isn’t obligated to respond to any outsider criticism, but if you want to be taken seriously by these outsiders who have these concerns, if you want buy-in from people who you claim to be working with and working for, and if you don’t want people at social entrepreneurship symposiums seriously considering questions like “Is the way to do the most good to destroy effective altruism?”, then it could be in your best interest to take good-faith criticisms and concerns seriously, even if the attitude comes across poorly, because it likely reflects some barrier to achieving your goals. But I think there probably isn’t much disagreement between us here.
I think this is an undervalued idea. But I also think that there’s a distinct but closely related idea, which is valuable, which is that for any Group X with Goal Y, it is nearly always instrumentally valuable for Group X to hear about suggestions about how it can better advance Goal Y, especially from those who believe that Goal Y is valuable. Sometimes this will read as (or have the effect of) disincentivizing adopting Goal Y (because it leads to criticism), but in fact it’s often much easier to marginally improve the odds of Goal Y being achieved by attempting to persuade Group X to do better at Y than to persuade Group ~X who believes ~Y. I take Carla Zoe to be doing this good sort of criticism, or at least that’s the most valuable way to read her work.
I would also point out that I think the proposition that “social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk” is both:
Probably undesirable to implement in practice because any criticism will have some disincentivizing effect.
Probably violated by your comment itself, since I’d guess that any normal person would be disincentivized to some extent from engaging in constructive criticism (above the baseline of apathy or jerkiness) that is likely to be labeled as immoral.
This is just to say that I value the general maxim you’re trying to advance here, but “never” is way too strong. Then it’s just a boring balancing question.
“Never” is too strong, okay. But I disagree with your second point. I feel like I was only speaking out against the framing that critics of EA are entitled to a lengthy reply because of EA being ambitious in its scope of caring. (This framing was explicit at least in the quoted paragraph, not necessarily in her post as a whole or her previous work.) I don’t feel like I was discouraging criticism. Basically, my point wasn’t about the act of criticizing at all, it was only about an added expectation that went with it, which I’d paraphrase as “EAs are doing something wrong unless they answer to my concerns point by point.”
Ah, okay. That seems more reasonable. Sorry for misunderstanding.
I agree insofar as status as an intended EA beneficiary does not presumptively provide someone with standing to demand answers from EA about risk management. However, non-EA persons are also potentially subject to the risk of harms generated by EA, and that status gives them at least some degree of standing.
I think the LOTR analogy is inapt. Taking Zoe’s comment here at face value, she is not suggesting that everyone put Project Mount Doom on hold until the Council of Elrond runs some public-opinion surveys. She is suggesting that reform ideas warrant further development and discussion. That’s closer to asking for some time of a mid-level bureaucrat at Rivendell and a package of lembas than diverting Frodo. Yes, it may be necessary to bring Frodo in at some point, but only if preliminary work suggests it would be worthwhile to do so.
I recognize that there could be some scenarios in which the utmost single-mindedness is essential: the Nazgûl have been sighted near the Ringbearer. But other EA decisions don’t suggest that funders and leaders are at Alert Condition Nazgûl. For example, while I don’t have a clear opinion on the Wytham purchase, it seems to have required a short-term expenditure of time and lock-up of funds for an expected medium-to-long-run payoff.
Yeah, I agree that if we have reason to assume that there might be significant expected harms caused by EA, then EAs owe us answers. But I think it’s a leap of logic to go from “because your stated ambition is to do risk analysis for all of us” to “That means that even if I don’t want to wear your brand, I can demand that you answer the questions of [...]” – even if we add the hidden premise “this is about expected harms caused by EA.” Just because EA does “risk analysis for all sentient beings” doesn’t mean that EA puts sentient beings at risk. Having suboptimal institutions is bad, but I think it’s far-fetched to say that it would put non-EAs at risk. At least, it would take more to spell out the argument (and might depend on specifics – perhaps the point goes through in very specific instances, but not so much if e.g., an EA org buys a fancy house).
There are some potentially dangerous memes in the EA memesphere around optimizing for the greater good (discussed here, recently), which is the main concern I actually see and share. But if that was the only concern, it should be highlighted as such (and it would be confusing why many arguments then seem to be about seemingly unrelated things). (I think risks from act consequentialism was one point out of many in the Democratizing risk paper – I remember I criticized the paper for not mentioning any of the ways EAs themselves have engaged with this concern.)
By contrast, if the criticism of EA is more about “you fail at your aims” rather than “you pose a risk to all of us,” then my initial point still applies, that EA doesn’t have to justify itself more so than any other similarly-sized, similarly powerful movement/group/ideology. Of course, it seems very much worth listening if a reasonable-seeming and informed person tells you “you fail at your aims.”
I would have agreed pre-FTX. In my view, EA actors meaningfully contributed—in a causal sense—to the rise of SBF, which generated significant widespread harm. Given the size and lifespan of EA, that is enough for a presumption of sufficient risk of future external harm for standing. There were just too many linkages and influences, several of them but-for causes.
EA has a considerable appetite for risk and little of what some commenters dismiss as “bureaucracy,” which increases the odds of other harms felt externally. So the presumption is not rebutted in my book.