Has anyone else noticed anti-LGBT and specifically anti-trans sentiment in the EA and rationalist communities? I encountered this recently and it was bad enough that I deactivated my LessWrong account and quit the Dank EA Memes group on Facebook.
I’m sorry you encountered this, and I don’t want to minimise your personal experience.
I think once any group becomes large enough, there will be people who associate with it who harbour all sorts of sentiments, including the ones you mention.
On the whole, though, I’ve found the EA community (both online and those I’ve met in person) to be incredibly pro-LGBT and pro-trans. The underlying moral views (e.g. non-traditionalism, impartiality, cosmopolitanism) point that way, as do the underlying demographics (e.g. young, highly educated, socially liberal).
I think where there might be a split is in progressive (as in, politically leftist) framings of issues and the type of language used to talk about these topics. Those often find it difficult to gain purchase in EA, especially on the rationalist/LW-adjacent side. But I don’t think that means the community as a whole, or even that sub-section, is ‘anti-LGBT’ or ‘anti-trans’, and I think there are historical and multifaceted reasons why there’s enmity between ‘progressive’ and ‘EA’ camps/perspectives.
Nevertheless, I’m sorry that you experienced this sentiment, and I hope you’re feeling ok.
The world of Zakat is really infuriating/frustrating. There is almost NO accountability/transparency demonstrated by orgs which collect and distribute zakat—they don’t seem to feel any obligation to show what they do with what they collect. Correspondingly, nearly every Muslim I’ve spoken to about zakat/effective zakat has expressed that their number 1 gripe with zakat is the strong suspicion that it’s being pocketed or corruptly used by these collection orgs.
Given this, it seems like there’s a really big niche in the market to be exploited by an EA-aligned zakat org. My feeling at the moment is that the org should focus on, and emphasise, its ability to be highly accountable and transparent about how it stores and distributes the zakat it collects.
The trick here is finding ways to distribute zakat to eligible recipients in cost-effective ways. Currently, possibly only two of the several dozen ‘most effective’ charities we endorse as a community would likely be zakat-compliant (New Incentives and GiveDirectly), and even then, only one or two of GiveDirectly’s programs would qualify.
This is pretty disappointing, because it means that the EA community would probably have to spend quite a lot of money either identifying new highly effective charities which are zakat-compliant, or starting new highly effective zakat-compliant orgs from scratch.
I’m not sure how I feel about this as a pathway, given the requirement that zakat donations only go to other people within the religion. On the one hand, it sounds like any charity that is constrained in this way in terms of recipients but has non-Muslim employees/contractors would have to be subsidised by non-zakat donations (based on the GiveDirectly post linked in another comment). It also means endorsing a rather narrow moral circle, whereas it might be more impactful to expend resources trying to expand that circle than to optimise within it.
On the other hand, it does cover a whole quarter of humanity, so potentially a lot of low-hanging fruit could be picked without correspondingly slowing moral circle expansion.
I don’t think helping people who feel an obligation to give zakat do so in the most effective way possible would constitute “endorsing” the awarding of strong preference to members of one’s religion as recipients of charity. It merely recognizes that the donor has already made this precommitment, and we want their donation to be as effective as possible given that precommitment.
It took me ~1 minute. I already had a favourite candidate so I put all my points towards that. I was half planning to come back and edit to add backup choices but I’ve seen the interim results now so I’m not going to do that.
3-4 minutes, mostly spent playing through various elimination-order scenarios in my head and trying to ensure that my assigned values would still reflect my preferences in at least the more likely scenarios.
The percentages I inputted were best guesses based on my qualitative impressions. If I’d been more quantitative about it, then I expect my allocations would have been better—i.e., closer to what I’d endorse on reflection. But I didn’t want to spend long on this, and figured that adding imperfect info to the commons would be better than adding no info.
IIRC it took me about a minute or two. But I already had high context and knew how I wanted to vote, so after getting oriented I didn’t have to spend time learning more or thinking through tradeoffs.
One of the canonical EA books (can’t remember which) suggests that if an individual stops consuming eggs (for example), almost all the time this will have zero impact, but there’s some small probability that on some occasion it will have a significant impact. And that can make it worthwhile.
I found this reasonable at the time, but I’m now inclined to think that it’s a poor generalization: in most scenarios the expected impact still remains negligible. The main influence on my shift is thinking about how decisions are made within organizations, and how power-seeking approaches are vastly superior to voting in most areas of life once the system exceeds a threshold of complexity.
Animal Charity Evaluators estimates that a plant-based diet spares 105 vertebrates per year. So if you’re vegan for 50 years, that comes out to 5,250 animals saved. If you put even 10% credence in the ACE number, where the counterfactual is zero impact, you’d still be helping over 500 animals in expectation.
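To make that arithmetic explicit, here is a minimal sketch of the expected-value calculation; the 105 animals/year figure is ACE’s, while the 10% credence and the zero-impact counterfactual are the illustrative assumptions from the comment above.

```python
# Expected animals spared under a simple two-hypothesis model:
# either ACE's estimate is roughly right, or the counterfactual impact is zero.
ace_estimate_per_year = 105      # vertebrates spared per year (ACE's figure)
years_vegan = 50
credence_in_estimate = 0.10      # illustrative credence that the estimate holds

animals_if_estimate_holds = ace_estimate_per_year * years_vegan            # 5,250
expected_animals_spared = credence_in_estimate * animals_if_estimate_holds  # 525

print(animals_if_estimate_holds, expected_animals_spared)  # 5250 525.0
```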
This position is commonly defended in consequentialist arguments for vegetarianism and veganism; see, e.g., Section 2 here, Section 2 here, and especially Day 2 here. The argument usually goes something like: if you stop buying one person’s worth of eggs, then in expectation, the industry will not produce something like one pound of eggs that it would’ve produced otherwise. Even if you are not the tipping point that causes them to cut production, due to uncertainty you still have positive expected impact. (I’m being a bit vague here, but I recommend reading at least one of the above readings—especially the third one—because they make the argument better than I can.)
In the case of animal product consumption, I’m confused what you mean by “the expected impact still remains negligible in most scenarios”—are you referring to different situations? I agree in principle that if the expected impact is tiny, then we don’t have much reason on consequentialist grounds to avoid the behavior, but do you have a particular situation in mind? Can you give concrete examples of where your shift in views applies/where you think the reasoning doesn’t apply well?
One of those sources (“Compassion, by the Pound”) estimates that reducing consumption by one egg results in an eventual fall in production by 0.91 eggs, i.e., less than a 1:1 effect.
I’m not arguing against the idea that reducing consumption leads to a long-term reduction in production. I’m doubtful that we can meaningfully generalise this kind of reasoning across different specifics as well as distinct contexts without investigating it practically.
For example, there probably exist many types of food products where reducing your consumption only has something like a 0.1:1 effect. (It’s also reasonable to consider that there are some cases where reducing consumption could even correspond with increased production.) There are many assumptions in place that might not hold true. Although I’m not interested in an actual discussion about veganism, one example of a strong assumption that might not be true is that the consumption of eggs is replaced by other food sources that are less bad to rely on.
I’m thinking that the overall “small chance of large impact by one person” argument probably doesn’t map well to scenarios involving voting, one-off or irregular events, sales of digital products, markets where the supply chain changes over time because there are many ways to use those products, or cases where excess production can still be useful. When I say “doesn’t map well”, I mean that the effect of one person taking action could be anywhere between 0:1 and 1:1 compared to what happens when a sufficient number of people simultaneously make the change in decision-making required for a significant shift. If one million people need to vote differently for a decision to be reversed, the expected impact of my one vote is always going to be less than 100% of one millionth, because it’s not guaranteed that one million people will sway their vote. If there’s only a 10% chance of the one million swayed votes materialising, I’d expect my impact to come out at far less than even 0.01:1 under a statistical model.
I agree that this style of reasoning depends heavily on the context studied (in particular, the mechanism at play), and that we can’t automatically use numbers from one situation for another. I also agree with what I take to be your main point: In many situations, the impact is less than 1:1 due to feedback loops and so on.
I’m still not sure I understand the specific examples you provide:
Animal products used as food: For commonly-consumed food animal products, I would be surprised if the numbers were much lower than those in the table from Compassion by the Pound (assuming that those numbers are roughly correct). This is because the mechanism used to change levels of production is similar in these cases. (The previous sentence is probably naive, so I’m open to corrections.) However, your point about substitution across goods (e.g., from beef to chicken) is well taken.
Other animal products: Not one of the examples you gave, but one material that’s interested me is cow leather. I’m guessing that (1) much leather is a byproduct* of beef production and (2) demand for leather is relatively elastic. Both of these suggest that abstaining from buying leather goods has a fairly small impact on farmed animal suffering.**
Voting: I am unsure what you mean here by “1:1”. Let me provide a concrete example, which I take to be the situation you’re talking about. We have an election with n voters and 2 candidates, with net benefit U if the better candidate wins. If all voters were to vote for the better candidate, then each person’s average impact is U / n. I assume that this is what you mean by the “1” in “1:1”: if someone has expected counterfactual impact U / n, then their impact is 1:1. If this is what you mean, then one’s impact can easily be greater than U / n, going against your claim. For example, if your credence on the better candidate winning is exactly 50%, then U / n is a lower bound; see Ord (2023), some of whose references show that in real-world situations, the probability of swaying the election can be much greater than 1 / n.
* Not exactly a byproduct, since sales of leather increase the revenue from raising a cow.
** This is not accounting for less direct impacts on demand, like influencing others around oneself.
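To illustrate the voting point numerically, here is a toy sketch (not Ord’s actual model, and the values of n, U, and p are hypothetical): n − 1 other voters each vote for the better candidate independently with probability p, and your vote matters only when the others split exactly evenly.

```python
from math import lgamma, exp, log

def prob_pivotal(n_other: int, p: float) -> float:
    """Probability that n_other independent voters split exactly evenly,
    so that one extra vote decides the outcome (computed in log space)."""
    if n_other % 2 != 0:
        n_other -= 1                      # need an even split for an exact tie
    k = n_other // 2
    log_binom = lgamma(n_other + 1) - 2 * lgamma(k + 1)
    return exp(log_binom + k * log(p) + (n_other - k) * log(1 - p))

n = 1_000_000   # hypothetical electorate size
U = 1.0         # net benefit of the better candidate winning (arbitrary units)

# With credence exactly 50% on each other voter, the chance of being decisive
# is ~0.0008, so the expected impact far exceeds the naive U / n = 0.000001.
print(prob_pivotal(n - 1, 0.5) * U, U / n)
```

In this toy case U / n acts as a lower bound rather than the expected value, which matches the point that a close race can make one vote worth much more than “one n-th” of the outcome.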
This is because the mechanism used to change levels of production is similar in these cases.
I’m unclear on the exact mechanism, and I suspect that the anecdote of “the manager sees the reduced demand across an extended period and decides to lower their store’s import by the exact observed reduction” is a gross oversimplification of what I would guess is a complex system, where the manager isn’t perfectly rational, may have long periods without review due to contractual reasons, and sits within a supply chain spanning multiple parties, all with non-linear relationships. Maybe some food supply chains differ significantly at the grower’s end, or in different countries. My missing knowledge here is why I don’t think I have a good reason to assume generality.
Other animal products
I think your cow leather example highlights the idea that, for me, threatens simplistic math assumptions. Some resources are multi-purpose, and can be made into different products through different processes and grades of quality depending on the use case. It’s pretty plausible that eggs are either used for human consumption or hatching. Some animal products might be more complicated, being used for human consumption, non-human consumption, or products in other industries. It seems reasonable for me to imagine a case where decreasing human consumption results in wasted production which “inspires” someone to redirect that production to another product/market which becomes successful and results in increased non-dietary demand. I predict that this isn’t uncommon and could dilute some of the marginal impact calculations which are true short-term but might not play out long-term. (I’m not saying that reducing consumption isn’t positive expectation; I’m saying that the true variance of the positive effect could be very high over a long-term period, which typically only becomes clear in retrospect.)
Voting
Thanks for that reference from Ord. I stand updated on voting in elections. I have lingering skepticism about a similar scenario that’s mathematically distinct: petition-like scenarios. E.g. if 100k people sign this petition, some organization is obliged to respond. Or if enough students push back on a school decision, the school might reconsider. This is kind of like voting except that the default vote is set. People who don’t know the petition exists have a default vote. I think the model described by Ord might still apply, I just haven’t got my head around this variation yet.
I agree that the simple story of a producer reacting to changing demand directly is oversimplified. I think we differ in that I think that absent specific information, we should assume that any commonly consumed animal product’s supply response to changing demand should be similar to the ones from Compassion, by the Pound. In other words, we should have our prior on impact centered around some of the numbers from there, and update from there. I can explain why I think this in more detail if we disagree on this.
Leather example:
Sure, I chose this example to show how one’s impact can be diluted, but I also think that decreasing leather consumption is unusually low-impact. I don’t think the stories for other animal products are as convincing. To take your examples:
Eggs for human consumption are unfertilized, so I’m not sure how they are useful for hatching. Perhaps you are thinking that producers could fertilize the eggs, but that seems expensive and wouldn’t make sense if demand for eggs is decreasing.
Perhaps I am uncreative, but I’m not sure how one would redirect unused animal products in a way that would replace the demand from human consumption. Raising an animal seems pretty expensive, so I’m not sure in what scenario this would be so profitable.
If we are taking into account the sort of “meta” effects of consuming fewer animal products (such as your example of causing people to innovate new ways of using animal products), then I agree that these increase the variance of impact but I suspect that they strongly skew the distribution of impact towards greater rather than lesser impact. Some specific, and straightforward, examples: companies research more alternatives to meat; society has to accommodate more vegans and vegan food ends up more widespread and appealing, making more people interested in the transition; people are influenced by their reducetarian friends to eat less meat.
Voting:
I’ll need to think about it more, but as with two-candidate votes, I think that petitions can often have better than 1:1 impact.
Brilliant, thank you. One of the very long lists of interp work on the forum seemed to have everything as mech interp (or possibly I just don’t recognize alternative key words). Does the EA AI safety community feel particularly strongly about mech interp or is it just my sample size being too small?
pinkfrog (and their associated account) has been banned for 1 month, because they voted multiple times on the same content (with two accounts), including upvoting pinkfrog’s comments with their other account. To be a bit more specific, this happened on one day, and there were 12 cases of double-voting in total (which we’ll remove). This is against our Forum norms on voting and using multiple accounts.
As a reminder, bans affect the user, not the account(s).
If anyone has questions or concerns, please feel free to reach out, and if you think we made a mistake here, you can appeal the decision.
Multiple people on the moderation team have conflicts of interest with pinkfrog, so I wanted to clarify our process for resolving this incident. We uncovered the norm violation after an investigation into suspicious voting patterns, and only revealed the user’s identity to part of the team. The moderators who made decisions about how to proceed aren’t aware of pinkfrog’s real identity (they only saw anonymized information).
It seems inconsistent to have this info public for some, and redacted for others. I do think it is good public service to have this information public, but am primarily pushing here for consistency and some more visibility around existing decisions.
Agree. It seems potentially pretty damaging to people’s reputations to make this information public (and attached to their names); that strikes me as a much bigger penalty than the bans. There should, at a minimum, be a consistent standard, and I’m inclined to think that standard should be having a high bar for releasing identifying information.
I think we should hesitate to protect people from reputational damage caused by people posting true information about them. Perhaps there’s a case to be made when the information is cherry-picked or biased, or there’s no opportunity to hear a fair response. But goodness, if we’ve learned anything from the last 18 months I hope it would include that sharing information about bad behaviour is sometimes a public good.
Fair point about reputational harms being worse and possibly too punishing in some cases. In terms of a proposed standard, it might be worth differentiating (if possible) between, e.g., careless errors or momentary lapses in judgement that were quickly rectified and likely caused no harm in expectation, versus a pattern of dishonest voting intended to mislead the EAF audience, especially if they or an org they work for stand to gain from it, or if the comments in question are directly harmful to another org. In these latter cases the reputational harm may be more justifiable.
For reasoning transparency / precedent development, it might be worthwhile to address two points:
(1) I seem to remember other multivoting suspensions being much longer than 1 month. I had gotten the impression that the de facto starting point for deliberate multiaccount vote manipulation was ~ six months. Was the length here based on mitigating factors, perhaps the relatively low number of violations and that they occurred on a single day? If the usual sanction is ~ six months, I think it would be good to say that here so newer users understand that multivoting is a really big deal.
(2) Here the public notice names the anon account pinkfrog (which has 3 comments + 50 karma), rather than the user’s non-anon account. The last multi-account voting suspension I saw named the user’s primary account, which was their real name. Even though the suspension follows the user, which account is publicly named can have a significant effect on public reputation. How does the mod team decide which account to name in the public notice?
pinkfrog: 1 month (12 cases of double voting)
LukeDing: 6 months (>200 times)
JamesS: indefinite (8 accounts, number not specified)
[Redacted]: 2 months (13 double votes, most are “likely accidental”, two “self upvotes”)
RichardTK: 6 months (number not specified)
Charles He: 10 years (not quite analogous, as these involved using alts to circumvent initial bans and included other violations)
Torres: 20 years (not quite analogous, as these involved using alts to circumvent initial bans and included other violations)
It’s kind of jarring to read that someone has been banned for “violating a norm”—that word to me implies informal agreements within the community. Why not call them “rules”?
LukeDing (and their associated alt account) has been banned for six months, due to voting & multiple-account-use violations. We believe that they voted on the same comment/post with two accounts more than two hundred times. This includes several instances of using an alt account to vote on their own comments.
LukeDing appealed the decision; we will reach out to them and ask them if they’d like us to feature a response from them under this comment.
As some of you might realize, some people on the moderation team have conflicts of interest with LukeDing, so we wanted to clarify our process for resolving this incident. We uncovered the norm violation after an investigation into suspicious voting patterns, and only revealed the user’s identity to part of the team. The moderators who made decisions about how to proceed weren’t aware of LukeDing’s identity (they only saw anonymized information).
Is more information about the appellate process available? The guide to forum norms says “We’re working on a formal process for reviewing submissions to this form, to make sure that someone outside of the moderation team will review every submission, and we’ll update this page when we have a process in place.”
The basic questions for me would include: who decides appeals, how much deference (if any) the adjudicator will give to the moderators’ initial decision (which probably should vary based on the type of decision at hand), and what kind of contact between the mods and appellate adjudicator(s) is allowed. On the last point, I would prefer as little ex parte contact as possible, and would favor having an independent vetted “advocate for the appellant” looped in if there needs to be contact to which the appellant is not privy.
Admittedly I have a professional bias toward liking process, but I would err on the side of more process than less where accounts are often linked to real-world identities and suspensions are sometimes for conduct that could be seen as dishonest or untrustworthy. I would prefer public disclosure of an action taken in cases like this only after the appellate process is complete for the same reasons, assuming the user timely indicates a desire to appeal the finding of a norm violation.
Finally, I commend keeping the moderators deciding whether a violation occurred blinded as to the user’s identity as a best practice in cases like this, even where there are no COIs. It probably should be revealed prior to determining a sanction, though.
I would prefer public disclosure of an action taken in cases like this only after the appellate process is complete for the same reasons, assuming the user timely indicates a desire to appeal the finding of a norm violation.
It does intuitively seem like an immediate temporary ban, made public only after whatever appeals are allowed have been exhausted, should give the moderation team basically everything they need while being more considerate of anyone whose appeals are ultimately upheld (i.e. innocent, or mitigating circumstances).
Quick update: we’ve banned Defacto, who we have strong reason to believe is another sockpuppet account for Charles He. We are extending Charles’s ban to be indefinite (he and others can appeal if they want to).
Just a quick note to say that we’ve removed a post sharing a Fermi estimate of the chances that the author finds a partner who matches their preferred characteristics and links to a date-me doc.
The Forum is for discussions about improving the world, and a key norm we highlight is “Stay on topic.” This is not the right space for coordinating dating. (Consider exploring LessWrong, ACX threads/classifieds, or EA-adjacent Facebook/Reddit/Discord groups for discussions that are primarily social.)
We’re not taking any other action about the author, although I’ve asked them to stay on topic in the future.
We’re issuing [Edit: identifying information redacted] a two-month ban for using multiple accounts to vote on the same posts and comments, and in one instance for commenting in a thread pretending to be two different users. [Edit: the user had a total of 13 double-votes; most were far apart and likely accidental, two were upvotes close together on others’ posts (which they claim were accidental as well), and two were deliberate self-upvotes from alternative accounts]
This is against the Forum norms around using multiple accounts. Votes are really important for the Forum: they provide feedback to authors and signal to readers what other users found most valuable, so we need to be particularly strict in discouraging this kind of vote manipulation.
A note on timing: the comment mentioned above is 7 months old but went unnoticed at the time; a report came in last week and triggered this investigation.
If [Edit: redacted] thinks that this is not right, he can appeal. As a reminder, bans affect the user, not the account.
[Edit: We have retroactively decided to redact the user’s name from this early message, and are currently rethinking our policies on the matter]
Do suspended users get a chance to make a public reply to the mod team’s findings? I don’t think that’s always necessary—e.g., we all see the underlying conduct when public incivility happens—but I think it’s usually warranted when the findings imply underhanded behavior (“pretending”) and the underlying facts aren’t publicly observable. There’s an appeal process, but that doesn’t address the public-reputation interests of the suspended person.
We’ve banned Vee from the Forum for 1 year. Their content seems to be primarily or significantly AI-generated,[1] and it’s not clear that they’re using it to share thoughts they endorse and have carefully engaged with. (This had come up before on one of their posts.) Our current policy on AI-generated content makes it clear that we’ll be stricter when moderating AI-generated content. Vee’s content doesn’t meet the standards of the Forum.
If Vee thinks that this is not right, they can appeal. If they come back, we’ll be checking to make sure that their content follows Forum norms. As a reminder, bans affect the user, not the account.
Different detectors for AI content are giving this content different scores, but we think that this is sufficiently likely true to act on.
It’s hard to be certain that something is AI-generated, and I’m not very satisfied with our processes or policies on this front. At the same time, the increase in the number of bots has made dealing with spam or off-topic/troll contributions harder, and I think that waiting for something closer to certainty will have costs that are too high.
Moderation update: We have indefinitely banned 8 accounts[1] that were used by the same user (JamesS) to downvote some posts and comments from Nonlinear and upvote critical content about Nonlinear. Please remember that voting with multiple accounts on the same post or comment is very much against Forum norms.
(Please note that this is separate from the incident described here)
Was emerson_fartz an acceptable username in the first place? (It may not have had a post history, in which case no one may have noticed its existence before the sockpuppeting detection, but the name sounds uncivil toward a living person.)
Moderation update: We have banned “Richard TK” for 6 months for using a duplicate account to double-vote on the same posts and comments. We’re also banning another account (Anin, now deactivated), which seems to have been used by that same user or by others to amplify those same votes. Please remember that voting with multiple accounts on the same post or comment is very much against Forum norms.
(Please note that this is separate from the incident described here)
Moderation update: A new user, Bernd Clemens Huber, recently posted a first post (“All or Nothing: Ethics on Cosmic Scale, Outer Space Treaty, Directed Panspermia, Forwards-Contamination, Technology Assessment, Planetary Protection, (and Fermi’s Paradox)”) that was a bit hard to make sense of. We hadn’t approved the post over the weekend and hadn’t processed it yet, when the Forum team got an angry and aggressive email today from the user in question calling the team “dipshits” (and providing a definition of the word) for waiting throughout the weekend.
If the user disagrees with our characterization of the email, they can email us to give permission for us to share the whole thing.
We have decided that this is not a promising start to the user’s interactions on the Forum, and have banned them indefinitely. Please let us know if you have concerns, and as a reminder, here are the Forum’s norms.
Moderation update: I’m indefinitely banning JasMaguire for an extremely racist comment that has since been deleted. We’ll likely revisit and update our forum norms to explicitly discourage this sort of behavior.
We have strong reason to believe that Charles He used multiple new accounts to violate his earlier 6-month-long ban. We feel that this means that we cannot trust Charles He to follow this forum’s norms, and are banning him from the Forum for the next 10 years (until December 20, 2032).
We have already issued temporary suspensions to several suspected duplicate accounts, including one which violated norms about rudeness and was flagged to us by multiple users. We will be extending the bans for each of these accounts to mirror Charles’s 10-year ban, but are giving the users an opportunity to message us if we have made any of those temporary suspensions in error (and have already reached out to them). While we aren’t >99% certain about any single account, we’re around 99% that at least one of these is Charles He.
I find this reflects worse on the mod team than Charles. This is nowhere near the first time I’ve felt this way.
Fundamentally, it seems the mod team heavily prioritizes civility and following shallow norms above enabling important discourse. The post on forum norms says a picture of geese all flying in formation and in one direction is the desirable state of the forum; I disagree that this is desirable. Healthy conflict is necessary to sustain a healthy community. Conflict sometimes entails rudeness. Some rudeness here and there is not a big deal and does not need to be stamped out entirely. This also applies to the people who get banned for criticizing EA rudely, even when they’re criticizing EA for its role in one of the great frauds of modern history. Banning EA critics for minor reasons is a short-sighted move at best.
Banning Charles for 10 years (!!) for the relatively small crime of evading a previous ban is a seriously flawed idea. Some of his past actions like doxxing someone (without any malice I believe) are problematic and need to be addressed, but do not deserve a 10 year ban. Some of his past comments, especially farther in the past, have been frustrating and net-negative to me, but these negative actions are not unrelated to some of his positive traits, like his willingness to step out of EA norms and communicate clearly rather than like an EA bot. The variance of his comments has steadily decreased over time. Some of his comments are even moderator-like, such as when he warned EA forum users not to downvote a WSJ journalist who wasn’t breaking any rules. I note that the mod team did not step in there to encourage forum norms.
I also find it very troubling that the mod team has consistent and strong biases in how it enforces its norms and rules, such as not taking any meaningful action against an EA in-group member for repeated and harmful violations of norms but banning an EA critic for 20 years for probably relatively minor and harmless violations. I don’t believe Charles would have received a similar ban if he was an employee of a brand name EA org or was in the right social circles.
Finally, as Charles notes, there should be an appeals process for bans.
the relatively small crime of evading a previous ban
I don’t think repeatedly evading moderator bans is a “relatively small crime”. If Forum moderation is to mean anything at all, it has to be consistently enforced, and if someone just decides that moderation doesn’t apply to them, they shouldn’t be allowed to post or comment on the Forum.
Charles only got to his 6-month ban via a series of escalating minor bans, most of which I agreed with. I think he got a lot of slack in his behaviour because he sometimes provided significant value, but he also sometimes (too frequently) behaved in ways that were seriously out of kilter with the goal of a healthy Forum.
I personally think the 10-year thing is kind of silly and he should just have been banned indefinitely at this point, then maybe have the ban reviewed in a little while. But it’s clear he’s been systematically violating Forum policies in a way that requires serious action.
The post on forum norms says a picture of geese all flying in formation and in one direction is the desirable state of the forum; I disagree that this is desirable.
It makes a lot of difference to me that Charles’ behavior was consistently getting better. If someone consistently flouts norms without any improvement, at some point they should be indefinitely banned. This is not the case with Charles. He started off with really high variance and at this point has reached a pretty tolerable amount. He has clearly worked on his actions. The comments he posted while flouting the mods’ authority generally contributed to the conversation. There are other people who have done worse things without action from the mod team. Giving him a 10-year ban without appeal for this feels less like a principled decision and more like another instance of the mod team asserting their authority and deciding not to deal with the messiness someone is causing.
I think this is probably true. I still think that systematically evading a Forum ban is worse behaviour (by which I mean, more lengthy-ban-worthy) than any of his previous transgressions.
There are other people who have done worse things without action from the mod team.
I am not personally aware of any, and am sceptical of this claim. Open to being convinced, though.
Indefinite suspension with leave to seek reinstatement after a stated suitable period would have been far preferable to a 10-year ban. A tenner isn’t necessary to vindicate the moderators’ authority, and the relevant conduct doesn’t give the impression of someone for whom the passage of ten years’ time is necessary before there is a reasonable probability that they would have become a suitable participant during the suspension.
Totally unrelated to the core of the matter, but do you intend to turn this into a frontpage post? I’m a bit inclined to say it’d be better for transparency, to inform others about the bans, and to deter potential violators… but I’m not sure, maybe you have a reason for preferring the shortform (or you’ll publish periodic updates on the frontpage).
In other forums and situations, there is a grace period during which a user can comment after receiving a very long ban. I think this is a good feature with several properties of long-term value.
We have strong reason to believe that Charles He used multiple new accounts to violate his earlier 6-month-long ban.
These are some of the accounts I created (but not all[1]):
Here are some highlights of some of the comments made by the accounts, within about a 30 day period.
Pointing out the hollowness of SBF’s business, which then produced a follow up comment, which was widely cited outside the forum, and may have helped generate a media narrative about SBF.
My alternate accounts were created successively, as they were successively banned. This was the only reason for subterfuge, which I view as distasteful.
I have information on the methods that the CEA team used to track my accounts (behavioral telemetry, my residential IP). This is not difficult to defeat. Not only did I not evade these methods, but I gave information about my identity several times (resulting in a ban each time). These choices, based on my distaste, are why the CEA team is “99% certain” (at least in a mechanical sense) that I have this 10-year ban.
We feel that this means that we cannot trust Charles He to follow this forum’s norms, and are banning him from the Forum for the next 10 years (until December 20, 2032).
I believe I am able to defend each of the actions on my previous bans individually (but never have before this). More importantly, I always viewed my behavior as a protest.
At this point, additional discussions are occurring within CEA[1], such as considering banning me from EAG and other EA events. By this, I’ll be joining blacklists of predators and deceivers.
As shown above, my use of alternate accounts did not promote or benefit myself in any way (even setting aside expected moderator action). Others in EA have used sock puppets to try to benefit their orgs, and gone on to be very successful.
Note that the moderator who executed the ban above, is not necessarily involved in any way in further action or policy mentioned in my comments. Four different CEA staff members have reached out or communicated to me in the last 30 days.
We have strong reason to believe that Torres (philosophytorres) used a second account to violate their earlier ban. We feel that this means that we cannot trust Torres to follow this forum’s norms, and are banning them for the next 20 years (until 1 October 2042).
Around a month ago, a post about the authorship of Democratising Risk got published. This post got taken down by its author. Before this happened, the moderation team had been deciding what to do with some aspects of the post (and the resulting discussion) that had violated Forum norms. We were pretty confident that we’d end up banning two users for at least a month, so we banned them temporarily while we sorted some things out.
One of these users was Throwaway151. We banned them for posting something a bit misleading (the post seemed to overstate its conclusions based on the little evidence it had, and wasn’t updated very quickly based on clear counter-evidence), and being uncivil in the comments. Their ban has passed, now. As a reminder, bans affect the user, not the account, so any other accounts Throwaway151 operated were also affected. The other user was philosophytorres — see the relevant update.
Idea for free (feel free to use, abuse, steal): a tool to automate donations + birthday messages. Imagine a tool that captures your contacts and their corresponding birthdays from Facebook; then you make (or schedule) one (or more) donations to a number of charities, and the tool customizes birthday messages with a card mentioning that you donated $ in their honor and sends it on their corresponding birthdays.
For instance: imagine you use this tool today; it’ll then map all the birthdays of your acquaintances for the next year. Then you’ll select donating, e.g., $1000 to AMF, and 20 friends or relatives you like; the tool will write 20 draft messages (you can select from different templates the tool suggests… there’s probably someone already doing this with ChatGPT), one for each of them, including a card certifying that you donated $50 to AMF in honor of their birthday, and send the message on the corresponding date (the tool could let you revise it one day before). There should be some options to customize messages and charities (I think it might be important that you choose a charity that the other person would identify with a little bit—maybe Every.org would be more interested in this than GWWC). So you’ll save a lot of time writing nice birthday messages for those you like. And, if you only select effective charities, you could deduct that amount from your pledge.
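A minimal sketch of what the core scheduling logic might look like; all names, the even budget split, and the message template are hypothetical, and a real tool would pull contacts and birthdays from Facebook and handle sending the cards.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Friend:
    name: str
    birthday: date  # next upcoming birthday

def plan_birthday_donations(friends: list[Friend], total_budget: float, charity: str):
    """Split the budget evenly across friends and draft one message per birthday."""
    per_friend = total_budget / len(friends)
    drafts = []
    for f in friends:
        message = (f"Happy birthday, {f.name}! In honour of your birthday "
                   f"I donated ${per_friend:.2f} to {charity}.")
        drafts.append({"send_on": f.birthday, "to": f.name, "message": message})
    return drafts

# Example from the comment above: $1000 across 20 friends -> $50 each to AMF.
friends = [Friend(f"Friend {i}", date(2025, 1 + i % 12, 1 + i)) for i in range(20)]
for draft in plan_birthday_donations(friends, 1000, "AMF")[:2]:
    print(draft["send_on"], draft["message"])
```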
A couple of weeks ago I blocked all mentions of “Effective Altruism”, “AI Safety”, “OpenAI”, etc from my twitter feed. Since then I’ve noticed it become much less of a time sink, and much better for mental health. Would strongly recommend!
This post recaps a survey about EA ‘meta’ topics (eg., talent pipelines, community building mistakes, field-building projects, etc.) that was completed by this year’s Meta Coordination Forum attendees. Meta Coordination Forum is an event for people in senior positions at community- and field-building orgs/programs, like CEA, 80K, and Open Philanthropy’s Global Catastrophic Risk Capacity Building team. (The event has previously gone by the name ‘Leaders Forum.’)
This post received less attention than I thought it would, so I’m bumping it here to make it a bit more well-known that this survey summary exists. All feedback is welcome!
I agree. Of all of CEA’s outputs this year, I think this could be the most useful for the community and I think it’s worth bumping. It’s our fault that it didn’t get enough traction; it came out just before EAG and we didn’t share it elsewhere.
(As someone who filled out the survey, I thought the framing of the questions was pretty off, and I felt like that jeopardized a lot of the value of the questions. I’m not sure how much better you can do; I think a survey like this is inherently hard. But I at least don’t feel like the survey results would help someone understand what I think much better.)
Thanks, Oli. Yes, I don’t think we nailed it with the questions and as you say, that’s always hard to do. Appreciate you adding this context for readers.
There is still plenty of time to vote in the Donation Election. The group donation pot currently stands at around $30,000. You can nudge that towards the projects you think are most worthwhile (plus, the voting system is fun and might teach you something about your preferences).
Also: you should donate to the Donation Election fund if:
a) You want to encourage thinking about effective donations on the Forum.
b) You want to commit to donating in line with the Forum’s preferences.
c) You’d like me to draw you one of these bad animals (or earn one of our other rewards):
NB: I can also draw these animals holding objects of your choice. Or wearing clothes. Anything is possible.
(not well thought-out musings. I’ve only spent a few minutes thinking about this.)
In thinking about the focus on AI within the EA community, the Fermi paradox popped into my head. For anyone unfamiliar with it and who doesn’t want to click through to Wikipedia, my quick summary of the Fermi paradox is basically: if there is such a high probability of extraterrestrial life, why haven’t we seen any indications of it?
On a very naïve level, AI doomerism suggests a simple solution to the Fermi paradox: we don’t see signs of extraterrestrial life because civilizations tend to create unaligned AI, which destroys them. But I suspect that the AI-relevant variation would actually be something more like this:
We claim that a superintelligent AI is going to be a reality soon (maybe between 5 years and 80 years from now), and in general is a benchmark that any civilization would reach eventually. But if superintelligent AI is a thing that civilizations tend to make, why aren’t we seeing any indications of that in the broader universe? If some extraterrestrial civilization made an aligned AI, wouldn’t we see the results of that in a variety of ways? If some extraterrestrial civilization made an unaligned AI, wouldn’t we see the results of that in a variety of ways?
Like many things, I suppose the details matter immensely. Depending on the morality of the creators, an aligned AI might spend resources expanding civilization throughout the galaxy, or it might happily putter along maintaining a globe’s agricultural system. Depending on how an unaligned AI is unaligned, it might be focused on turning the whole universe into paperclips, or it might simply kill its creators to prevent them from enduring suffering. So on a very simplistic level it seems that the claim of “civilizations tend to make AI eventually, and it really is a superintelligent and world-changing technology” is consistent with the reality of “we don’t observe any signs of extraterrestrial intelligence.”
A couple months ago I remarked that Sam Bankman-Fried’s trial was scheduled to start in October, and people should prepare for EA to be in the headlines. It turned out that his trial did not actually generate much press for EA, but a month later EA is again making news as a result of recent Open AI board decisions.
A couple quick points:
It is often the case that people’s behavior is much more reasonable than what is presented in the media. It is also sometimes the case that the reality is even stupider than what is presented. We currently don’t know what actually happened, and should hold multiple hypotheses simultaneously.[1]
It’s very hard to predict the outcome of media stories. Here are a few takes I’ve heard; we should consider that any of these could become the dominant narrative.
Vinod Khosla (The Information): “OpenAI’s board members’ religion of ‘effective altruism’ and its misapplication could have set back the world’s path to the tremendous benefits of artificial intelligence”
John Thornhill (Financial Times): One entrepreneur who is close to OpenAI says the board was “incredibly principled and brave” to confront Altman, even if it failed to explain its actions in public. “The board is rightly being attacked for incompetence,” the entrepreneur told me. “But if the new board is composed of normal tech people, then I doubt they’ll take safety issues seriously.”
The Economist: “The chief lesson is the folly of policing technologies using corporate structures … Fortunately for humanity, there are bodies that have a much more convincing claim to represent its interests: elected governments”
The previous point notwithstanding, people’s attention spans are extremely short, and the median outcome of a news story is ~nothing. I’ve commented before that FTX’s collapse had little effect on the average person’s perception of EA, and we might expect a similar thing to happen here.[2]
Animal welfare has historically been unique amongst EA causes in having a dedicated lobby fighting against it. While we don’t yet have a HumaneWatch for AI Safety, we should be aware that people have strong interests in how AI develops, and this means that stories about AI will be treated differently from those about, say, malaria.
It can be frustrating to feel that a group you are part of is being judged by the actions of a couple people you’ve never met nor have any strong feelings about. The flipside of this though is that we get to celebrate the victories of people we’ve never met. Here are a few things posted in the last week that I thought were cool:
The Against Malaria Foundation is in the middle of a nine-month bed net distribution which is expected to prevent 20 million cases of malaria, and about 40,000 deaths. (Rob Mather)
The Shrimp Welfare Project signed an agreement to prevent 125 million shrimps per year from having their eyes cut off and other painful farming practices. (Ula Zarosa)
The Belgian Senate voted to add animal welfare to their Constitution. (Bob Jacobs)
Scott Alexander’s recent post also has a nice summary of victories.
Note that the data collected here does not exclude the possibility that perception of EA was affected in some subcommunities, and it might be the case that some subcommunities (e.g. OpenAI staff) do have a changed opinion, even if the average person’s opinion is unchanged
I’ve commented before that FTX’s collapse had little effect on the average person’s perception of EA
Just for the record, I think the evidence you cited there was shoddy, and I think we are seeing continued references to FTX in basically all coverage of the OpenAI situation, showing that it did clearly have a lasting effect on the perception of EA.
Reputation is lazily-evaluated. Yes, if you ask a random person on the street what they think of you, they won’t know, but when your decisions start influencing them, they will start getting informed, and we are seeing really very clear evidence that when people start getting informed, FTX is heavily influencing their opinion.
So I think there is a real jump of notoriety once the journalistic class knows who you are. And they now know who we are. “EA, the social movement involved in the FTX and OpenAI crises” is not a good epithet.
There are a lot of recent edits on that article by a single editor, apparently a former NY Times reporter (the edit log is public). From the edit summaries, those edits look rather unfriendly, and the article as a whole feels negatively slanted to me. So I’m not sure how much weight I’d give that article specifically.
Sure, here are the top hits for “Effective Altruism OpenAI” (I did no cherry-picking, this was the first search term that I came up with, and I am just going top to bottom). Each one mentions FTX in a way that pretty clearly matters for the overall article:
“AI safety was embraced as an important cause by big-name Silicon Valley figures who believe in effective altruism, including Peter Thiel, Elon Musk and Sam Bankman-Fried, the founder of crypto exchange FTX, who was convicted in early November of a massive fraud.”
Top comment: ” I only learned about EA during the FTX debacle. And was unaware until recently of its focus on AI. Since been reading and catching up …”
“Coming just weeks after effective altruism’s most prominent backer, Sam Bankman-Fried, was convicted of fraud, the OpenAI meltdown delivered another blow to the movement, which believes that carefully crafted artificial-intelligence systems, imbued with the correct human values, will yield a Golden Age—and failure to do so could have apocalyptic consequences.”
“EA is currently being scrutinized due to its association with Sam Bankman-Fried’s crypto scandal, but less has been written about how the ideology is now driving the research agenda in the field of artificial intelligence (AI), creating a race to proliferate harmful systems, ironically in the name of ‘AI safety.’”
“The first was caused by the downfall of convicted crypto fraudster Sam Bankman-Fried, who was once among the leading figures of EA, an ideology that emerged in the elite corridors of Silicon Valley and Oxford University in the 2010s offering an alternative, utilitarian-infused approach to charitable giving.”
Ah yeah sorry, the claim of the post you criticized was not that FTX isn’t mentioned in the press, but rather that those mentions don’t seem to actually have impacted sentiment very much.
I thought when you said “FTX is heavily influencing their opinion” you were referring to changes in sentiment, but possibly I misunderstood you – if you just mean “journalists mention it a lot” then I agree.
You are also welcome to check Twitter mentions or do other analysis of people talking publicly about EA. I don’t think this is a “journalist only” thing. I will take bets you will see a similar pattern.
I actually did that earlier, then realized I should clarify what you were trying to claim. I will copy the results in below, but even though they support the view that FTX was not a huge deal I want to disclaim that this methodology doesn’t seem like it actually gets at the important thing.
But anyway, my original comment text:
As a convenience sample I searched twitter for “effective altruism”. The first reference to FTX doesn’t come until tweet 36, which is a link to this. Honestly it seems mostly like a standard anti-utilitarianism complaint; it feels like FTX isn’t actually the crux.
In contrast, I see 3 e/acc-type criticisms before that, two “I like EA but this AI stuff is too weird” things (including one retweeted by Yann LeCun??), two “EA is tech-bro/not diverse” complaints and one thing about Wytham Abbey.
I just tried to reproduce the Twitter datapoint. Here is the first tweet when I sort by most recent:
Most tweets are negative, mostly referring to the OpenAI thing. Among the top 10 I see three references to FTX. This continues to be quite remarkable, especially given that it’s been more than a year, and these tweets are quite short.
I don’t know what search you did to find a different pattern. Maybe it was just random chance that I got many more than you did.
Top was mostly showing me tweets from people that I follow, so my sense is it was filtered in a personalized way. I am not fully sure how it works, but it didn’t seem the right type of filter.
Yeah, makes sense. Although I just tried doing the “latest” sort and went through the top 40 tweets without seeing a reference to FTX/SBF.
My guess is that this filter just (unsurprisingly) shows you whatever random thing people are talking about on twitter at the moment, and it seems like the random EA-related thing of today is this, which doesn’t mention FTX.
Probably you need some longitudinal data to have this be useful.
The previous point notwithstanding, people’s attention spans are extremely short, and the median outcome of a news story is ~nothing. I’ve commented before that FTX’s collapse had little effect on the average person’s perception of EA, and we might expect a similar thing to happen here.
I think this is an oversimplification. This effect is largely caused by competing messages: the modern internet optimizes information for memetic fitness, e.g. by maximizing emotional intensity or persuasive effect, and people have so much routine exposure to content that leads their minds around in various directions that they become wary (or see having strong reactions to anything at all as immature, since a large portion of outcries on the internet come disproportionately from teenagers). This is the main reason people take things with a grain of salt.
However, Overton windows can still undergo big and lasting shifts (this process could also be engineered deliberately long before generative AI emerged, e.g. via clown attacks which exploit social status instincts to consistently hijack any person’s impressions of any targeted concept). The 80,000 Hours podcast with Cass Sunstein covered how Overton windows are dominated by vague impressions of what ideas are acceptable or unacceptable to talk about (note: this podcast was from 2019). This dynamic could plausibly strangle EA’s access to fresh talent, and AI safety’s access to mission-critical policy influence, for several years (which would be far too long).
It can be frustrating to feel that a group you are part of is being judged by the actions of a couple people you’ve never met nor have any strong feelings about.
On the flip side, johnswentworth actually had a pretty good take on this: that the human brain is instinctively predisposed to over-focus on the risk of one’s in-group becoming unpopular among everyone else:
First, [AI safety being condemned by the public] sure does sound like the sort of thing which the human brain presents to us as a far larger, more important fact than it actually is. Ingroup losing status? Few things are more prone to distorted perception than that.
Thanks for the helpful comment – I had not seen John’s dialogue and I think he is making a valid point.
Fair point that the lack of impact might not be due to attention span but instead things like having competing messages.
In case you missed it: Angelina Li compiled some growth metrics about EA here; they seem to indicate that FTX’s collapse did not “strangle” EA (though it probably wasn’t good).
This was an identical twin study in which one twin went vegan for eight weeks, and the other didn’t. Nice results on some cardiometabolic lab values (e.g., LDL-C) even though the non-vegan twin was also upping their game nutritionally. I don’t think the fact that vegan diets generally improve cardiometabolic health is exactly fresh news, but I find the study design to be unusually legible for nutritional research.
The following table is from Scott Alexander’s post, which you should check out for the sources and (many, many) caveats.
This table can’t tell you what your ethical duties are. I’m concerned it will make some people feel like whatever they do is just a drop in the bucket—all you have to do is spend 11,000 hours without air conditioning, and you’ll have saved the same amount of carbon an F-35 burns on one airstrike! But I think the most important thing it could convince you of is that if you were previously planning on letting yourself be miserable to save carbon, you should buy carbon offsets instead. Instead of boiling yourself alive all summer, spend between $0.04 and $2.50 an hour to offset your air conditioning use.
the reason for starting OpenAI was to create a counterweight to Google and DeepMind, which at the time had two-thirds of all AI talent and basically infinite money and compute. And there was no counterweight. It was a unipolar world. And Larry Page and I used to be very close friends, and I would stay at his house, and I would talk to Larry into the late hours of the night about AI safety. And it became apparent to me that Larry [Page] did not care about AI safety. I think perhaps the thing that gave it away was when he called me a speciesist for being pro-humanity, as in a racist, but for species. So I’m like, “Wait a second, what side are you on, Larry?” And then I’m like, okay, listen, this guy’s calling me a speciesist. He doesn’t care about AI safety. We’ve got to have some counterpoint here because this seems like we could be, this is no good.
I’m posting here because I remember reading a claim that Elon started OpenAI after getting bad vibes from Demis Hassabis. But he claims that his actual motivation was that Larry Page is an extinctionist. That seems like a better reason.
By the time Musk (and Altman et al.) started OA, it was in response to Page buying Hassabis’s DeepMind. So there is no real contradiction here between being spurred by Page’s attitude and treating Hassabis as the specific enemy. It’s not like Page was personally overseeing DeepMind (or Google Brain) research projects, and Page quasi-retired about a year after the DM purchase anyway (and about half a year before OA officially became a thing).
“Profits for investors in this venture [ETA: OpenAI] were capped at 100 times their investment (though thanks to a rule change this cap will rise by 20% a year starting in 2025).”
I stumbled upon this quote in this recent Economist article [archived] about OpenAI. I couldn’t find any other good source that supports the claim, so it might not be accurate. The earliest mention of the claim I could find is from January 17th, 2023, although it only talks about OpenAI “proposing” the rule change.
If true, this would make the profit cap much less meaningful, especially under longer AI timelines. For example, a $1 billion investment made in 2023 would be capped at roughly 1,540 times its value by 2040.
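For concreteness, here is the arithmetic behind that figure as a minimal sketch, assuming the 20% compounding starts in 2025 and applies to the quoted 100x cap exactly as the article describes:

```python
# Sketch of the compounding profit cap described above (assumes the 20%/year
# increase starts in 2025 and applies to the original 100x cap).
def profit_cap_multiple(year, base_cap=100, growth=0.20, start_year=2025):
    years_of_growth = max(0, year - start_year)
    return base_cap * (1 + growth) ** years_of_growth

print(round(profit_cap_multiple(2030)))  # ~249x
print(round(profit_cap_multiple(2040)))  # ~1541x
```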
I’ve talked to some people who are involved with OpenAI secondary markets, and they’ve broadly corroborated this.
One source told me that after a specific year (didn’t say when), the cap can increase 20% per year, and the company can further adjust the cap as they fundraise.
Has anyone else noticed anti-LGBT and specifically anti-trans sentiment in the EA and rationalist communities? I encountered this recently and it was bad enough that I deactivated my LessWrong account and quit the Dank EA Memes group on Facebook.
I’m sorry you encountered this, and I don’t want to minimise your personal experience
I think once any group becoms large enough there will be people who associate with it who harbour all sorts of sentiments including the ones you mention.
On the whole though, i’ve found the EA community (both online and those I’ve met in person) to be incredibly pro-LGBT and pro-trans. Both the underlying moral views (e.g. non-traditionalism, impartially and cosmpolitanism etc) point that way, as do the underlying demographics (e.g. young, high educated, socially liberal)
I think where there might be a split is in progressive (as in, leftist politically) framings of issues and the type of language used to talk about these topics. I think those often find it difficult to gain purchase in EA, especially on the rationalist/LW-adjacent side. But I don’t think those mean that the community as a whole, or even the sub-section, are ‘anti-LGBT’ and ‘anti-trans’, and I think there are historical and multifacted reasons why there’s emnity between ‘progressive’ and ‘EA’ camps/perspectives.
Nevertheless, I’m sorry that you experience this sentiment, and I hope you’re feeling ok.
EZ#1
The world of Zakat is really infuriating/frustrating. There is almost NO accountability/transparency demonstrated by orgs which collect and distribute zakat—they don’t seem to feel any obligation to show what they do with what they collect. Correspondingly, nearly every Muslim I’ve spoken to about zakat/effective zakat has expressed that their number 1 gripe with zakat is the strong suspicion that it’s being pocketed or corruptly used by these collection orgs.
Given this, it seems like there’s a really big niche in the market to be exploited by an EA-aligned zakat org. My feeling at the moment is that the org should focus on, and emphasise, its ability to be highly accountable and transparent about how it stores and distributes the zakat it collects.
The trick here is finding ways to distribute zakat to eligible recipients in cost-effective ways. Currently, possibly only two of the several dozen ‘most effective’ charities we endorse as a community would be likely zakat-compliant (New Incentives, and Give Directly), and even then, only one or two of GiveDirectly’s programs would qualify.
This is pretty disappointing, because it means that the EA community would probably have to spend quite a lot of money either identifying new highly effective charities which are zakat-compliant, or start new highly-effective zakat complaint orgs from scratch.
I’m not sure how I feel about this as a pathway, given the requirement that zakat donations only go to other people within the religion. On the one hand, it sounds like any charity that is constrained in this way in terms of recipients but had non-Muslim employees/contractors, would have to be subsidised by non-zakat donations (based on the GiveDirectly post linked in another comment). It also means endorsing a rather narrow moral circle, whereas potentially it might be more impactful to expend resources trying expand that circle than to optimise within it.
Otoh, it does cover a whole quarter of humanity, and so potentially a lot of low hanging fruit can be gained without correspondingly slowing moral circle expansion.
I don’t think helping people who feel an obligation to give zakat do so in the most effective way possible would constitute “endorsing” the awarding of strong preference to members of one’s religion as recipients of charity. It merely recognizes that the donor has already made this precommitment, and we want their donation to be as effective as possible given that precommitment.
I’d love to know more about what the people you’ve spoken to have said—e.g. what kinds of accountability or transparency are they looking for?
What are the criteria for zakat compliance?
Some previous discussion here.
Not sure if you know, but GiveDirectly did have a zakat fund last year https://fundraisers.givedirectly.org/campaigns/yemenzakat
Yep, thanks!
If you voted in the Donation Election, how long did it take you? (What did you spend the most time on?)
I’d be really grateful for quick notes. (You can also private message me if you prefer.)
It took me ~1 minute. I already had a favourite candidate so I put all my points towards that. I was half planning to come back and edit to add backup choices but I’ve seen the interim results now so I’m not going to do that.
3-4 minutes, mostly on playing through various elimination-order scenarios in my head and trying to ensure that my assigned values would still reflect my preferences in at least more likely scenarios.
took me ~5min
It took me just under 5 minutes.
The percentages I inputted were best guesses based on my qualitative impressions. If I’d been more quantitative about it, then I expect my allocations would have been better—i.e., closer to what I’d endorse on reflection. But I didn’t want to spend long on this, and figured that adding imperfect info to the commons would be better than adding no info.
IIRC it took me about a minute or two. But I already had high context and knew how I wanted to vote, so after getting oriented I didn’t have to spend time learning more or thinking through tradeoffs.
One of the canonical EA books (can’t remember which) suggests that if an individual stops consuming eggs (for example), almost all the time this will have zero impact, but there’s some small probability that on some occasion it will have a significant impact. And that can make it worthwhile.
I found this reasonable at the time, but I’m now inclined to think it’s a generalization that holds poorly: in most scenarios the expected impact still remains negligible. The main influence on my shift is thinking about how decisions are made within organizations, and how power-seeking approaches vastly outperform voting in most areas of life once the system exceeds a threshold of complexity.
Anyone care to propose updates on this topic?
Animal Charity Evaluators estimates that a plant-based diet spares 105 vertebrates per year. So if you’re vegan for 50 years, that comes out to 5,250 animals saved. If you put even 10% credence in the ACE number, where the counterfactual is zero impact, you’d still be helping over 500 animals in expectation.
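As a quick sanity check, here is that expected-value arithmetic spelled out (taking ACE’s figure at face value and varying only the credence):

```python
# Expected-value arithmetic from the comment above (ACE's ~105 vertebrates/year
# estimate is taken at face value; only the credence in it is varied).
animals_per_year = 105
years_vegan = 50
credence = 0.10  # probability the estimate is roughly right, vs. zero impact otherwise

print(animals_per_year * years_vegan)             # 5250 animals if the estimate holds
print(animals_per_year * years_vegan * credence)  # 525.0 animals in expectation
```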
This position is commonly defended in consequentialist arguments for vegetarianism and veganism; see, e.g., Section 2 here, Section 2 here, and especially Day 2 here. The argument usually goes something like: if you stop buying one person’s worth of eggs, then in expectation the industry will not produce something like one pound of eggs that it would’ve produced otherwise. Even if you are not the tipping point that causes them to cut production, under uncertainty you still have positive expected impact. (I’m being a bit vague here, but I recommend reading at least one of the above readings—especially the third one—because they make the argument better than I can.)
In the case of animal product consumption, I’m confused what you mean by “the expected impact still remains negligible in most scenarios”—are you referring to different situations? I agree in principle that if the expected impact is tiny, then we don’t have much reason on consequentialist grounds to avoid the behavior, but do you have a particular situation in mind? Can you give concrete examples of where your shift in views applies/where you think the reasoning doesn’t apply well?
One of those sources (“Compassion, by the Pound”) estimates that reducing consumption by one egg results in an eventual fall in production by 0.91 eggs, i.e., less than a 1:1 effect.
I’m not arguing against the idea that reducing consumption leads to a long-term reduction in production. I’m doubtful that we can meaningfully generalise this kind of reasoning across different specifics as well as distinct contexts without investigating it practically.
For example, there probably exist many types of food products where reducing your consumption only has something like a 0.1:1 effect. (It’s also reasonable to consider that there are some cases where reducing consumption could even correspond with increased production.) There are many assumptions in place that might not hold true. Although I’m not interested in an actual debate about veganism, one example of a strong assumption that might not hold is that the consumption of eggs is replaced by other food sources that are less bad to rely on.
I’m thinking that the overall “small chance of large impact by one person” argument probably doesn’t map well to scenarios involving voting, one-off or irregular events, sales of digital products, markets where the supply chain changes over time because there are many ways to use those products, or cases where excess production can still be useful. By “doesn’t map well”, I mean that the effect of one person taking action could be anywhere between 0:1 and 1:1 compared to what happens when a sufficient number of people simultaneously make the change in decision-making required for a significant shift. If one million people need to vote differently for a decision to be reversed, the expected impact of my one vote is always going to be less than one millionth of the total, because it’s not guaranteed that one million people will sway their vote. If there’s only a 10% chance of those one million swayed votes materialising, I’d expect my impact to come out far below even 0.01:1 under a statistical model.
Thanks, this makes things much clearer to me.
I agree that this style of reasoning depends heavily on the context studied (in particular, the mechanism at play), and that we can’t automatically use numbers from one situation for another. I also agree with what I take to be your main point: In many situations, the impact is less than 1:1 due to feedback loops and so on.
I’m still not sure I understand the specific examples you provide:
Animal products used as food: For commonly-consumed food animal products, I would be surprised if the numbers were much lower than those in the table from Compassion by the Pound (assuming that those numbers are roughly correct). This is because the mechanism used to change levels of production is similar in these cases. (The previous sentence is probably naive, so I’m open to corrections.) However, your point about substitution across goods (e.g., from beef to chicken) is well taken.
Other animal products: Not one of the examples you gave, but one material that’s interested me is cow leather. I’m guessing that (1) much of leather is a byproduct* of beef production and (2) demand for leather is relatively elastic. Both of these suggest that abstaining from buying leather goods has a fairly small impact on farmed animal suffering.**
Voting: I am unsure what you mean here by “1:1”. Let me provide a concrete example, which I take to be the situation you’re talking about: an election with n voters and 2 candidates, where the net benefit of the better candidate winning is U. If all voters were to vote for the better candidate, then each person’s average impact is U / n. I assume that this is what you mean by the “1” in “1:1”: if someone has expected counterfactual impact U / n, then their impact is 1:1. If so, then one’s impact can easily be greater than U / n, going against your claim. For example, if your credence in the better candidate winning is exactly 50%, then U / n is a lower bound; see Ord (2023), some of whose references show that in real-world situations the probability of swaying the election can be much greater than 1 / n. (A toy numerical sketch follows after the footnotes below.)
* Not exactly a byproduct, since sales of leather increases the revenue from raising a cow.
** This is not accounting for less direct impacts on demand, like influencing others around oneself.
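To make the voting point above concrete, here is a toy numerical sketch (my own illustration, not the model Ord uses): with n other voters who each independently favour the better candidate with probability p, your vote matters only when the others split exactly evenly.

```python
# Toy pivotal-voter model: probability that n other voters split exactly evenly,
# computed in log space to avoid floating-point underflow.
from math import lgamma, log, exp

def prob_exact_tie(n_others, p):
    k = n_others // 2  # n_others is assumed even, so an exact tie is possible
    log_pmf = (lgamma(n_others + 1) - lgamma(k + 1) - lgamma(n_others - k + 1)
               + k * log(p) + (n_others - k) * log(1 - p))
    return exp(log_pmf)

n = 1_000_000
print(prob_exact_tie(n, 0.50))  # ~8e-4, far larger than 1/n = 1e-6
print(prob_exact_tie(n, 0.51))  # ~1e-90, negligible once the race isn't close
```

Under this toy model, expected impact is far above U / n when the race is genuinely close, and collapses when it isn’t—consistent with the references Ord discusses.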
I’m unclear on the exact mechanism, and I suspect that the story of “the manager sees the reduced demand over an extended period and decides to lower their store’s orders by the exact observed reduction” is a gross oversimplification of what I’d guess is a complex system: the manager isn’t perfectly rational, there may be long periods without review for contractual reasons, and the supply chain spans multiple parties with non-linear relationships between them. Maybe some food supply chains differ significantly at the grower’s end, or in different countries. This missing knowledge is why I don’t think I have a good reason to assume generality.
Other animal products
I think your cow leather example highlights the idea that, for me, threatens simplistic mathematical assumptions. Some resources are multi-purpose and can be made into different products through different processes and grades of quality depending on the use case. It’s pretty plausible that eggs are used either for human consumption or for hatching. Some animal products might be more complicated, and be used for human consumption, non-human consumption, or products in other industries. It seems reasonable to imagine a case where decreasing human consumption results in wasted production which “inspires” someone to redirect that production to another product/market which becomes successful and results in increased non-dietary demand. I predict that this isn’t uncommon and could dilute some of the marginal-impact calculations, which hold short-term but might not play out long-term. (I’m not saying that reducing consumption isn’t positive in expectation; I’m saying that the true variance of the positive effect could be very high over the long term, and typically only becomes clear in retrospect.)
Voting
Thanks for that reference from Ord. I stand updated on voting in elections. I have lingering skepticism about a similar scenario that’s mathematically distinct: petition-like scenarios. E.g. if 100k people sign this petition, some organization is obliged to respond. Or if enough students push back on a school decision, the school might reconsider. This is kind of like voting except that the default vote is set. People who don’t know the petition exists have a default vote. I think the model described by Ord might still apply, I just haven’t got my head around this variation yet.
I agree that the simple story of a producer reacting to changing demand directly is oversimplified. I think we differ in that I think that absent specific information, we should assume that any commonly consumed animal product’s supply response to changing demand should be similar to the ones from Compassion, by the Pound. In other words, we should have our prior on impact centered around some of the numbers from there, and update from there. I can explain why I think this in more detail if we disagree on this.
Leather example:
Sure, I chose this example to show how one’s impact can be diluted, but I also think that decreasing leather consumption is unusually low-impact. I don’t think the stories for other animal products are as convincing. To take your examples:
Eggs for human consumption are unfertilized, so I’m not sure how they are useful for hatching. Perhaps you are thinking that producers could fertilize the eggs, but that seems expensive and wouldn’t make sense if demand for eggs is decreasing.
Perhaps I am uncreative, but I’m not sure how one would redirect unused animal products in a way that would replace the demand from human consumption. Raising an animal seems pretty expensive, so I’m not sure in what scenario this would be so profitable.
If we are taking into account the sort of “meta” effects of consuming fewer animal products (such as your example of causing people to innovate new ways of using animal products), then I agree that these increase the variance of impact but I suspect that they strongly skew the distribution of impact towards greater rather than lesser impact. Some specific, and straightforward, examples: companies research more alternatives to meat; society has to accommodate more vegans and vegan food ends up more widespread and appealing, making more people interested in the transition; people are influenced by their reducetarian friends to eat less meat.
Voting:
I’ll need to think about it more, but as with two-candidate votes, I think that petitions can often have better than 1:1 impact.
Does anyone have a resource that maps out different types/subtypes of AI interpretability work?
E.g. mechanistic interpretability and concept-based interpretability, what other types are there and how are they categorised?
Late to the party here but I’d check out Räuker et al. (2023), which provides one taxonomy of AI interpretability work.
Brilliant, thank you. One of the very long lists of interp work on the forum seemed to have everything as mech interp (or possibly I just don’t recognize alternative key words). Does the EA AI safety community feel particularly strongly about mech interp or is it just my sample size being too small?
Not an expert, but I think your impression is correct. See this post, for example (I recommend the whole sequence).
Not a direct answer, but you might find the Interpretability (ML & AI) tag on LW relevant. That’s where I found Neel Nanda’s longlist of interpretability theories of impact (published Mar-22 so it may be quite outdated), and Charbel-Raphaël’s Against Almost Every Theory of Impact of Interpretability responding to it (published Aug-23, so much more current).
Moderation updates
pinkfrog (and their associated account) has been banned for 1 month, because they voted multiple times on the same content (with two accounts), including upvoting pinkfrog’s comments with their other account. To be a bit more specific, this happened on one day, and there were 12 cases of double-voting in total (which we’ll remove). This is against our Forum norms on voting and using multiple accounts.
As a reminder, bans affect the user, not the account(s).
If anyone has questions or concerns, please feel free to reach out, and if you think we made a mistake here, you can appeal the decision.
Multiple people on the moderation team have conflicts of interest with pinkfrog, so I wanted to clarify our process for resolving this incident. We uncovered the norm violation after an investigation into suspicious voting patterns, and only revealed the user’s identity to part of the team. The moderators who made decisions about how to proceed aren’t aware of pinkfrog’s real identity (they only saw anonymized information).
Have the moderators come to a view on identifying information? Is pinkfrog the account with higher karma or more forum activity?
In other cases the identity has been revealed to various degrees:
LukeDing
JamesS
Richard TK (noting that an alt account in this case, Anin, was also named)
[Redacted]
Charles He
philosophytorres (but identified as “Torres” in the moderator post)
It seems inconsistent to have this info public for some, and redacted for others. I do think it is a good public service to have this information public, but I am primarily pushing here for consistency and more visibility around existing decisions.
Agree. It seems potentially pretty damaging to people’s reputations to make this information public (and attached to their names); that strikes me as a much bigger penalty than the bans. There should, at a minimum, be a consistent standard, and I’m inclined to think that standard should be having a high bar for releasing identifying information.
I think we should hesitate to protect people from reputational damage caused by people posting true information about them. Perhaps there’s a case to be made when the information is cherry-picked or biased, or there’s no opportunity to hear a fair response. But goodness, if we’ve learned anything from the last 18 months I hope it would include that sharing information about bad behaviour is sometimes a public good.
Fair point about reputational harms being worse and possibly too punishing in some cases. I think in terms of a proposed standard it might be worth differentiating (if possible) between e.g. careless errors, or momentary lapses in judgement that were quickly rectified and likely caused no harm in expectation, versus a pattern of dishonest voting intended to mislead the EAF audience, and especially if they or an org that they work for stand to gain from it, or the comments in question are directly harmful to another org. In these latter cases the reputational harm may be more justifiable.
For reasoning transparency / precedent development, it might be worthwhile to address two points:
(1) I seem to remember other multivoting suspensions being much longer than 1 month. I had gotten the impression that the de facto starting point for deliberate multiaccount vote manipulation was ~ six months. Was the length here based on mitigating factors, perhaps the relatively low number of violations and that they occurred on a single day? If the usual sanction is ~ six months, I think it would be good to say that here so newer users understand that multivoting is a really big deal.
(2) Here the public notice names the anon account pinkfrog (which has 3 comments + 50 karma), rather than the user’s non-anon account. The last multi account voting suspension I saw named the user’s primary account, which was their real name. Even though the suspension follows the user, which account is publicly named can have a significant effect on public reputation. How does the mod team decide which user to name in the public notice?
pinkfrog: 1 month (12 cases of double voting)
LukeDing: 6 months (>200 times)
JamesS: indefinite (8 accounts, number not specified)
[Redacted]: 2 months (13 double votes, most are “likely accidental”, two “self upvotes”)
RichardTK: 6 months (number not specified)
Charles He: 10 years (not quite analogous as these are using alts to circumvent initial bans, included other violations)
Torres: 20 years (not quite analogous as these are using alts to circumvent initial bans, included other violations)
Torres was banned for 20 years according to the link.
Corrected, thanks!
It’s kind of jarring to read that someone has been banned for “violating a norm”—to me that word implies an informal agreement among community members. Why not call them “rules”?
LukeDing (and their associated alt account) has been banned for six months, due to voting & multiple-account-use violations. We believe that they voted on the same comment/post with two accounts more than two hundred times. This includes several instances of using an alt account to vote on their own comments.
This is against our Forum norms on voting and using multiple accounts. We will remove the duplicate votes.
As a reminder, bans affect the user, not the account(s).
If anyone has questions or concerns, please feel free to reach out, and if you think we made a mistake here, you can appeal the decision.
We also want to add:
LukeDing appealed the decision; we will reach out to them and ask them if they’d like us to feature a response from them under this comment.
As some of you might realize, some people on the moderation team have conflicts of interest with LukeDing, so we wanted to clarify our process for resolving this incident. We uncovered the norm violation after an investigation into suspicious voting patterns, and only revealed the user’s identity to part of the team. The moderators who made decisions about how to proceed weren’t aware of LukeDing’s identity (they only saw anonymized information).
Is more information about the appellate process available? The guide to forum norms says “We’re working on a formal process for reviewing submissions to this form, to make sure that someone outside of the moderation team will review every submission, and we’ll update this page when we have a process in place.”
The basic questions for me would include: who decides appeals; how much deference (if any) the adjudicator gives to the moderators’ initial decision, which should probably vary based on the type of decision at hand; and what kind of contact between the mods and the appellate adjudicator(s) is allowed. On the last point, I would prefer as little ex parte contact as possible, and would favor having an independent vetted “advocate for the appellant” looped in if there needs to be contact to which the appellant is not privy.
Admittedly I have a professional bias toward liking process, but I would err on the side of more process than less where accounts are often linked to real-world identities and suspensions are sometimes for conduct that could be seen as dishonest or untrustworthy. I would prefer public disclosure of an action taken in cases like this only after the appellate process is complete for the same reasons, assuming the user timely indicates a desire to appeal the finding of a norm violation.
Finally, I commend keeping the moderators deciding whether a violation occurred blinded as to the user’s identity as a best practice in cases like this, even where there are no COIs. It probably should be revealed prior to determining a sanction, though.
It does intuitively seem like an immediate temporary ban, made public only after whatever appeals are allowed have been exhausted, should give the moderation team basically everything they need while being more considerate of anyone whose appeals are ultimately upheld (i.e. innocent, or mitigating circumstances).
Quick update: we’ve banned Defacto, who we have strong reason to believe is another sockpuppet account for Charles He. We are extending Charles’s ban to be indefinite (he and others can appeal if they want to).
You can find more on our rules for pseudonymity and multiple accounts here. If you have any questions or concerns about this, please also feel free to reach out to us at forum-moderation@effectivealtruism.org.
Just a quick note to say that we’ve removed a post sharing a Fermi estimate of the chances that the author finds a partner who matches their preferred characteristics and links to a date-me doc.
The Forum is for discussions about improving the world, and a key norm we highlight is “Stay on topic.” This is not the right space for coordinating dating. (Consider exploring LessWrong, ACX threads/classifieds, or EA-adjacent Facebook/Reddit/Discord groups for discussions that are primarily social.)
We’re not taking any other action about the author, although I’ve asked them to stay on topic in the future.
We’re issuing [Edit: identifying information redacted] a two-month ban for using multiple accounts to vote on the same posts and comments, and in one instance for commenting in a thread pretending to be two different users. [Edit: the user had a total of 13 double-votes; most were far apart in time and likely accidental, two were upvotes close together on others’ posts (which they claim were accidental as well), and two were deliberate self-upvotes from alternative accounts]
This is against the Forum norms around using multiple accounts. Votes are really important for the Forum: they provide feedback to authors and signal to readers what other users found most valuable, so we need to be particularly strict in discouraging this kind of vote manipulation.
A note on timing: the comment mentioned above is 7 months old but went unnoticed at the time; a report about it came in last week and triggered this investigation.
If [Edit: redacted] thinks that this is not right, he can appeal. As a reminder, bans affect the user, not the account.
[Edit: We have retroactively decided to redact the user’s name from this early message, and are currently rethinking our policies on the matter]
Do suspended users get a chance to make a public reply to the mod team’s findings? I don’t think that’s always necessary—e.g., we all see the underlying conduct when public incivility happens—but I think it’s usually warranted when the findings imply underhanded behavior (“pretending”) and the underlying facts aren’t publicly observable. There’s an appeal process, but that doesn’t address the public-reputation interests of the suspended person.
[A moderator had edited this comment to remove identifying information, after a moderation decision to retroactively redact the user’s identification]
Just quickly noting that none of the double-votes were on that thread or similar ones, as far as I know.
I guess it makes sense that people who disagree with the norms are more likely to do underhanded things to violate them.
We’ve banned Vee from the Forum for 1 year. Their content seems to be primarily or significantly AI-generated,[1] and it’s not clear that they’re using it to share thoughts they endorse and have carefully engaged with. (This had come up before on one of their posts.) Our current policy on AI-generated content makes it clear that we’ll be stricter when moderating AI-generated content. Vee’s content doesn’t meet the standards of the Forum.
If Vee thinks that this is not right, they can appeal. If they come back, we’ll be checking to make sure that their content follows Forum norms. As a reminder, bans affect the user, not the account.
Different detectors for AI content are giving this content different scores, but we think that this is sufficiently likely true to act on.
It’s hard to be certain that something is AI-generated, and I’m not very satisfied with our processes or policies on this front. At the same time, the increase in the number of bots has made dealing with spam or off-topic/troll contributions harder, and I think that waiting for something closer to certainty will have costs that are too high.
Update, we have unbanned Vee. We are new to using AI detection tools and we made a mistake. We apologize.
Moderation update: We have indefinitely banned 8 accounts[1] that were used by the same user (JamesS) to downvote some posts and comments from Nonlinear and upvote critical content about Nonlinear. Please remember that voting with multiple accounts on the same post or comment is very much against Forum norms.
(Please note that this is separate from the incident described here)
my_bf_is_hot, inverted_maslow, aht_me, emerson_fartz, daddy_of_upvoting, ernst-stueckelberg, gpt-n, jamess
Was emerson_fartz an acceptable username in the first place? (It may not have had a post history in which case no one may have noticed its existence before the sockpuppeting detection, but that sounds uncivil toward a living person)
It was not, and indeed it was only used for voting, so we only noticed it during this investigation.
Moderation update: We have banned “Richard TK” for 6 months for using a duplicate account to double-vote on the same posts and comments. We’re also banning another account (Anin, now deactivated), which seems to have been used by that same user or by others to amplify those same votes. Please remember that voting with multiple accounts on the same post or comment is very much against Forum norms.
(Please note that this is separate from the incident described here)
Moderation update: A new user, Bernd Clemens Huber, recently posted a first post (“All or Nothing: Ethics on Cosmic Scale, Outer Space Treaty, Directed Panspermia, Forwards-Contamination, Technology Assessment, Planetary Protection, (and Fermi’s Paradox)”) that was a bit hard to make sense of. We hadn’t approved the post over the weekend and hadn’t processed it yet, when the Forum team got an angry and aggressive email today from the user in question calling the team “dipshits” (and providing a definition of the word) for waiting throughout the weekend.
If the user disagrees with our characterization of the email, they can email us to give permission for us to share the whole thing.
We have decided that this is not a promising start to the user’s interactions on the Forum, and have banned them indefinitely. Please let us know if you have concerns, and as a reminder, here are the Forum’s norms.
Moderation update:
I’m indefinitely banning JasMaguire for an extremely racist comment that has since been deleted. We’ll likely revisit and update our forum norms to explicitly discourage this sort of behavior.
Please feel free to get in touch with forum-moderation@effectivealtruism.org if you have any concerns.
Moderation update:
We have strong reason to believe that Charles He used multiple new accounts to violate his earlier 6-month-long ban. We feel that this means that we cannot trust Charles He to follow this forum’s norms, and are banning him from the Forum for the next 10 years (until December 20, 2032).
We have already issued temporary suspensions to several suspected duplicate accounts, including one which violated norms about rudeness and was flagged to us by multiple users. We will be extending the bans for each of these accounts to mirror Charles’s 10-year ban, but are giving the users an opportunity to message us if we have made any of those temporary suspensions in error (and have already reached out to them). While we aren’t >99% certain about any single account, we’re around 99% that at least one of these is Charles He.
You can find more on our rules for pseudonymity and multiple accounts here. If you have any questions or concerns about this, please also feel free to reach out to us at forum-moderation@effectivealtruism.org.
I find this reflects worse on the mod team than Charles. This is nowhere near the first time I’ve felt this way.
Fundamentally, it seems the mod team heavily prioritizes civility and following shallow norms over enabling important discourse. The post on forum norms presents a picture of geese all flying in formation, in one direction, as the desirable state of the forum; I disagree that this is desirable. Healthy conflict is necessary to sustain a healthy community. Conflict sometimes entails rudeness. Some rudeness here and there is not a big deal and does not need to be stamped out entirely. This also applies to the people who get banned for criticizing EA rudely, even when they’re criticizing EA for its role in one of the great frauds of modern history. Banning EA critics for minor reasons is a short-sighted move at best.
Banning Charles for 10 years (!!) for the relatively small crime of evading a previous ban is a seriously flawed idea. Some of his past actions like doxxing someone (without any malice I believe) are problematic and need to be addressed, but do not deserve a 10 year ban. Some of his past comments, especially farther in the past, have been frustrating and net-negative to me, but these negative actions are not unrelated to some of his positive traits, like his willingness to step out of EA norms and communicate clearly rather than like an EA bot. The variance of his comments has steadily decreased over time. Some of his comments are even moderator-like, such as when he warned EA forum users not to downvote a WSJ journalist who wasn’t breaking any rules. I note that the mod team did not step in there to encourage forum norms.
I also find it very troubling that the mod team has consistent and strong biases in how it enforces its norms and rules, such as not taking any meaningful action against an EA in-group member for repeated and harmful violations of norms but banning an EA critic for 20 years for probably relatively minor and harmless violations. I don’t believe Charles would have received a similar ban if he was an employee of a brand name EA org or was in the right social circles.
Finally, as Charles notes, there should be an appeals process for bans.
Can you give some examples of this?
Various comments made by this user in multiple posts some time ago, some of which received warnings by mods but nothing beyond that.
I don’t think repeatedly evading moderator bans is a “relatively small crime”. If Forum moderation is to mean anything at all, it has to be consistently enforced, and if someone just decides that moderation doesn’t apply to them, they shouldn’t be allowed to post or comment on the Forum.
Charles only got to his 6 month ban via a series of escalating minor bans, most of which I agreed with. I think he got a lot of slack in his behaviour because he sometimes provided significant value, but sometimes (with insufficient infrequency) behaved in ways that were seriously out of kilter with the goal of a healthy Forum.
I personally think the 10-year thing is kind of silly and he should just have been banned indefinitely at this point, then maybe have the ban reviewed in a little while. But it’s clear he’s been systematically violating Forum policies in a way that requires serious action.
I have no idea if this was intentional on the part of the moderators, but they aren’t all flying in the same direction. ;-)
It makes a lot of difference to me that Charles’ behavior was consistently getting better. If someone consistently flouts norms without any improvement, at some point they should be indefinitely banned. This is not the case with Charles. He started off with really high variance and at this point has reached a pretty tolerable amount. He has clearly worked on his actions. The comments he posted while flouting the mods’ authority generally contributed to the conversation. There are other people who have done worse things without action from the mod team. Giving him a 10 year ban without appeal for this feels more motivated by another instance of the mod team asserting their authority and deciding not to deal with messiness someone is causing than a principled decision.
I think this is probably true. I still think that systematically evading a Forum ban is worse behaviour (by which I mean, more lengthy-ban-worthy) than any of his previous transgressions.
I am not personally aware of any, and am sceptical of this claim. Open to being convinced, though.
Indefinite suspension with leave to seek reinstatement after a stated suitable period would have been far preferable to a 10-year ban. A tenner isn’t necessary to vindicate the moderators’ authority, and the relevant conduct doesn’t give the impression of someone who would need a full ten years to pass before there is a reasonable probability of their having become a suitable participant again.
Totally unrelated to the core of the matter, but do you intend to turn this into a frontpage post? I’m a bit inclined to say it’d be better for transparency, to inform others about the bans, and to deter potential violators… but I’m not sure; maybe you have a reason for preferring the shortform (or you’ll publish periodic updates on the frontpage).
In other forums and situations, there is a grace period where a user can comment after receiving a very long ban. I think this is a good feature that has several properties with long term value.
These are some of the accounts I created (but not all[1]):
anonymous-for-unimpressive-reasons
making-this-account (this was originally “making this account feels almost as bad as pulling a Holden,” but was edited by the moderators afterwards).
to-be-stuck-inside-of-mobile
worldoptimization-was-based
Here are some highlights of some of the comments made by the accounts, within about a 30 day period.
Pointing out the hollowness of SBF’s business, which then produced a follow up comment, which was widely cited outside the forum, and may have helped generate a media narrative about SBF.
Jabbing at some dismal public statements of Eliezer Yudkowsky’s, and malign dynamics revealed by this episode. (Due to time limitations, I did not elaborate on the moral and intellectual defects of his justifications of keeping FTX funding, which to my amazement and disappointment, got hundreds of upvotes and no substantive dissension).
In a moderate way, exploring (blunting?) Oliver’s ill-advised (destructive?) strategy of radical disclosure.
A post making EAs aware of a major article revealing inside knowledge of SBF within EA; this post was, on net, a release of tension in the EA community.
Trying to alleviate concerns about CEA’s solvency, and giving information about the nature of control and financing of CEA.
Defending Karnofsky and Moskovitz and making fun of them (this comment was the only comment Moskovitz has responded to in EA history so far).
Discouraging EA forum users from downvoting out of hand or creating blacklists/whitelists of journalists.
My alternate accounts were created successively, as they were successively banned. This was the only reason for subterfuge, which I view as distasteful.
I have information on the methods that the CEA team used to track my accounts (behavioral telemetry, my residential IP). These are not difficult to defeat. Not only did I not evade these methods, but I gave information about my identity several times (resulting in a ban each time). These choices, rooted in that distaste, are why the CEA team is “99% certain”, and are, at least in a mechanical sense, why I have this 10-year ban.
Other accounts not listed, were created or used for purposes that I view as good, and are not relevant to the substance of the comment.
The only warning received on any of my alternate accounts was here:
This was a warning in response to my comment insulting another user. The user being insulted was Charles He.
I believe I am able to defend each of the actions on my previous bans individually (but never have before this). More importantly, I always viewed my behavior as a protest.
At this point, additional discussions are occurring by CEA[1], such as considering my ban from EAG and other EA events. By this, I’ll be joining blacklists of predators and deceivers.
As shown above, my use of alternate accounts did not promote or benefit myself in any way (even setting aside expected moderator action). Others in EA have used sock puppets to try to benefit their orgs, and gone on to be very successful.
Note that the moderator who executed the ban above, is not necessarily involved in any way in further action or policy mentioned in my comments. Four different CEA staff members have reached out or communicated to me in the last 30 days.
Moderation update:
We have strong reason to believe that Torres (philosophytorres) used a second account to violate their earlier ban. We feel that this means that we cannot trust Torres to follow this forum’s norms, and are banning them for the next 20 years (until 1 October 2042).
Moderation update:
Around a month ago, a post about the authorship of Democratising Risk got published. This post got taken down by its author. Before this happened, the moderation team had been deciding what to do with some aspects of the post (and the resulting discussion) that had violated Forum norms. We were pretty confident that we’d end up banning two users for at least a month, so we banned them temporarily while we sorted some things out.
One of these users was Throwaway151. We banned them for posting something a bit misleading (the post seemed to overstate its conclusions based on the little evidence it had, and wasn’t updated very quickly based on clear counter-evidence), and being uncivil in the comments. Their ban has passed, now. As a reminder, bans affect the user, not the account, so any other accounts Throwaway151 operated were also affected. The other user was philosophytorres — see the relevant update.
Idea for free (feel free to use, abuse, steal): a tool to automate donations + birthday messages. Imagine a tool that captures your contacts and their corresponding birthdays from Facebook; then you make (or schedule) one or more donations to a number of charities, and the tool customizes birthday messages with a card mentioning that you made a donation in their honor and sends them on the corresponding birthdays.
For instance: imagine you use this tool today; it then maps all the birthdays of your acquaintances for the next year. You select a donation—e.g., $1,000 to AMF—and 20 friends or relatives you like; the tool writes 20 draft messages, one for each of them (you can pick from templates the tool suggests… there’s probably someone already doing this with ChatGPT), each including a card certifying that you donated $50 to AMF in honor of their birthday, and sends the message on the corresponding date (the tool could let you revise it a day beforehand). There should be some options to customize messages and charities (I think it might be important that you choose a charity the other person would identify with a little—maybe Every.org would be more interested in this than GWWC). So you’d save a lot of time writing nice birthday messages for the people you like. And, if you only select effective charities, you could deduct that amount from your pledge.
Is there anything like that already?
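For what it’s worth, here is a minimal sketch of the core scheduling logic (my own illustration; the Facebook contact export, the donation itself, and the message delivery are all assumed to happen elsewhere):

```python
# Minimal sketch of the birthday-donation-message idea. All names and data here
# are hypothetical; contact import and sending are assumed to be handled elsewhere.
from dataclasses import dataclass
from datetime import date

@dataclass
class Contact:
    name: str
    birthday: date   # assumed to come from an exported contact list
    charity: str     # charity chosen for this person, e.g. "AMF"

def draft_message(contact: Contact, amount_usd: float) -> str:
    return (f"Happy birthday, {contact.name}! In honour of your birthday, "
            f"I've donated ${amount_usd:.0f} to {contact.charity}.")

def todays_messages(contacts: list[Contact], total_budget: float, today: date) -> list[str]:
    per_person = total_budget / len(contacts)  # split a yearly budget evenly
    return [draft_message(c, per_person)
            for c in contacts
            if (c.birthday.month, c.birthday.day) == (today.month, today.day)]

contacts = [Contact("Alice", date(1990, 12, 14), "AMF"),
            Contact("Bob", date(1988, 3, 2), "GiveDirectly")]
print(todays_messages(contacts, total_budget=1000, today=date(2023, 12, 14)))
```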
A couple of weeks ago I blocked all mentions of “Effective Altruism”, “AI Safety”, “OpenAI”, etc from my twitter feed. Since then I’ve noticed it become much less of a time sink, and much better for mental health. Would strongly recommend!
throw e/acc on there too
Bumping a previous EA forum post: Key EA decision-makers on the future of EA, reflections on the past year, and more (MCF 2023).
This post recaps a survey about EA ‘meta’ topics (e.g., talent pipelines, community building mistakes, field-building projects, etc.) that was completed by this year’s Meta Coordination Forum attendees. Meta Coordination Forum is an event for people in senior positions at community- and field-building orgs/programs, like CEA, 80K, and Open Philanthropy’s Global Catastrophic Risk Capacity Building team. (The event previously went by the name ‘Leaders Forum.’)
This post received less attention than I thought it would, so I’m bumping it here to make it a bit more well-known that this survey summary exists. All feedback is welcome!
I agree. Of all of CEA’s outputs this year, I think this could be the most useful for the community and I think it’s worth bumping. It’s our fault that it didn’t get enough traction; it came out just before EAG and we didn’t share it elsewhere.
(As someone who filled out the survey, I thought the framing of the questions was pretty off, and I felt like that jeopardized a lot of the value of the questions. I am not sure how much better you can do, I think a survey like this is inherently hard, but I at least don’t feel like the survey results would help someone understand what I think much better)
Thanks, Oli. Yes, I don’t think we nailed it with the questions and as you say, that’s always hard to do. Appreciate you adding this context for readers.
There is still plenty of time to vote in the Donation Election. The group donation pot currently stands at around $30,000. You can nudge that towards the projects you think are most worthwhile (plus, the voting system is fun and might teach you something about your preferences).
Also- you should donate to the Donation Election fund if:
a) You want to encourage thinking about effective donations on the Forum.
b) You want to commit to donating in line with the Forum’s preferences.
c) You’d like me to draw you one of these bad animals (or earn one of our other rewards):
NB: I can also draw these animals holding objects of your choice. Or wearing clothes. Anything is possible.
Voted because of this, thanks for the nudge!
Thanks for letting me know Kirsten! Good way to start the day :)
Relatedly, here are some Manifold Markets about whether the Donation Election Fund will reach:
$40K
$50K
$75K
$100K
(not well thought-out musings. I’ve only spent a few minutes thinking about this.)
In thinking about the focus on AI within the EA community, the Fermi paradox popped into my head. For anyone unfamiliar with it and who doesn’t want to click through to Wikipedia, my quick summary of the Fermi paradox is basically: if there is such a high probability of extraterrestrial life, why haven’t we seen any indications of it?
On a very naïve level, AI doomerism suggests a simple solution to the Fermi paradox: we don’t see signs of extraterrestrial life because civilizations tend to create unaligned AI, which destroys them. But I suspect that the AI-relevant variation would actually be something more like this:
Like many things, I suppose the details matter immensely. Depending on the morality of its creators, an aligned AI might spend resources expanding civilization throughout the galaxy, or it might happily putter along maintaining a globe’s agricultural system. Depending on how an unaligned AI is unaligned, it might be focused on turning the whole universe into paperclips, or it might simply kill its creators to prevent them from enduring suffering. So on a very simplistic level, the claim that “civilizations tend to make AI eventually, and it really is a superintelligent and world-changing technology” seems consistent with the observation that “we don’t observe any signs of extraterrestrial intelligence.”
Utilitarianism.net is currently down.
Looks okay to me now. How is it for you?
Thoughts on the OpenAI Board Decisions
A couple of months ago I remarked that Sam Bankman-Fried’s trial was scheduled to start in October, and that people should prepare for EA to be in the headlines. It turned out that his trial did not actually generate much press for EA, but a month later EA is again making news as a result of the recent OpenAI board decisions.
A couple quick points:
It is often the case that people’s behavior is much more reasonable than what is presented in the media. It is also sometimes the case that the reality is even stupider than what is presented. We currently don’t know what actually happened, and should hold multiple hypotheses simultaneously.[1]
It’s very hard to predict the outcome of media stories. Here are a few takes I’ve heard; we should consider that any of these could become the dominant narrative.
Vinod Khosla (The Information): “OpenAI’s board members’ religion of ‘effective altruism’ and its misapplication could have set back the world’s path to the tremendous benefits of artificial intelligence”
John Thornhill (Financial Times): One entrepreneur who is close to OpenAI says the board was “incredibly principled and brave” to confront Altman, even if it failed to explain its actions in public. “The board is rightly being attacked for incompetence,” the entrepreneur told me. “But if the new board is composed of normal tech people, then I doubt they’ll take safety issues seriously.”
The Economist: “The chief lesson is the folly of policing technologies using corporate structures … Fortunately for humanity, there are bodies that have a much more convincing claim to represent its interests: elected governments”
The previous point notwithstanding, people’s attention spans are extremely short, and the median outcome of a news story is ~nothing. I’ve commented before that FTX’s collapse had little effect on the average person’s perception of EA, and we might expect a similar thing to happen here.[2]
Animal welfare has historically been unique amongst EA causes in having a dedicated lobby fighting against it. While we don’t yet have a HumaneWatch for AI Safety, we should be aware that people have strong interests in how AI develops, and this means that stories about AI will be treated differently from those about, say, malaria.
It can be frustrating to feel that a group you are part of is being judged by the actions of a couple people you’ve never met nor have any strong feelings about. The flipside of this though is that we get to celebrate the victories of people we’ve never met. Here are a few things posted in the last week that I thought were cool:
The Against Malaria Foundation is in the middle of a nine-month bed net distribution which is expected to prevent 20 million cases of malaria, and about 40,000 deaths. (Rob Mather)
The Shrimp Welfare Project signed an agreement to prevent 125 million shrimps per year from having their eyes cut off and other painful farming practices. (Ula Zarosa)
The Belgian Senate voted to add animal welfare to their Constitution. (Bob Jacobs)
Scott Alexander’s recent post also has a nice summary of victories.
A collection of prediction markets about this event can be found here.
Note that the data collected here does not exclude the possibility that perceptions of EA changed in some subcommunities (e.g. OpenAI staff), even if the average person’s opinion is unchanged.
Just for the record, I think the evidence you cited there was shoddy, and I think we are seeing continued references to FTX in basically all coverage of the OpenAI situation, showing that it did clearly have a lasting effect on the perception of EA.
Reputation is lazily-evaluated. Yes, if you ask a random person on the street what they think of you, they won’t know, but when your decisions start influencing them, they will start getting informed, and we are seeing really very clear evidence that when people start getting informed, FTX is heavily influencing their opinion.
I would guess too that these two events have made it much easier to reference EA in passing. eg I think this article wouldn’t have been written 18 months ago. https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362
So I think there is a real jump of notoriety once the journalistic class knows who you are. And they now know who we are. “EA, the social movement involved in the FTX and OpenAI crises” is not a good epithet.
Thanks! Could you share said evidence? The data sources I cited certainly have limitations, having access to more surveys etc. would be valuable.
The Wikipedia page on effective altruism mentions Bankman-Fried 11 times, and after/during the OpenAI story, it was edited to include a lot of criticism, ~half of which was written after FTX (e.g. it quotes this tweet https://twitter.com/sama/status/1593046526284410880 )
It’s the first place I would go to if I wanted an independent take on “what’s effective altruism?” I expect many others to do the same.
There are a lot of recent edits on that article by a single editor, apparently a former NY Times reporter (the edit log is public). From the edit summaries, those edits look rather unfriendly, and the article as a whole feels negatively slanted to me. So I’m not sure how much weight I’d give that article specifically.
Sure, here are the top hits for “Effective Altruism OpenAI” (I did no cherry-picking, this was the first search term that I came up with, and I am just going top to bottom). Each one mentions FTX in a way that pretty clearly matters for the overall article:
Bloomberg: “What is Effective Altruism? What does it mean for AI?”
“AI safety was embraced as an important cause by big-name Silicon Valley figures who believe in effective altruism, including Peter Thiel, Elon Musk and Sam Bankman-Fried, the founder of crypto exchange FTX, who was convicted in early November of a massive fraud.”
Reddit “I think this was an Effective Altruism (EA) takeover by the OpenAI board”
Top comment: “I only learned about EA during the FTX debacle. And was unaware until recently of its focus on AI. Since been reading and catching up …”
WSJ: “How a Fervent Belief Split Silicon Valley—and Fueled the Blowup at OpenAI”
“Coming just weeks after effective altruism’s most prominent backer, Sam Bankman-Fried, was convicted of fraud, the OpenAI meltdown delivered another blow to the movement, which believes that carefully crafted artificial-intelligence systems, imbued with the correct human values, will yield a Golden Age—and failure to do so could have apocalyptic consequences.”
Wired: “Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’”
“EA is currently being scrutinized due to its association with Sam Bankman-Fried’s crypto scandal, but less has been written about how the ideology is now driving the research agenda in the field of artificial intelligence (AI), creating a race to proliferate harmful systems, ironically in the name of ‘AI safety.’”
Semafor: “The AI industry turns against its favorite philosophy”
“The first was caused by the downfall of convicted crypto fraudster Sam Bankman-Fried, who was once among the leading figures of EA, an ideology that emerged in the elite corridors of Silicon Valley and Oxford University in the 2010s offering an alternative, utilitarian-infused approach to charitable giving.”
Ah yeah sorry, the claim of the post you criticized was not that FTX isn’t mentioned in the press, but rather that those mentions don’t seem to actually have impacted sentiment very much.
I thought when you said “FTX is heavily influencing their opinion” you were referring to changes in sentiment, but possibly I misunderstood you – if you just mean “journalists mention it a lot” then I agree.
You are also welcome to check Twitter mentions or do other analysis of people talking publicly about EA. I don’t think this is a “journalist only” thing. I will take bets you will see a similar pattern.
I actually did that earlier, then realized I should clarify what you were trying to claim. I will copy the results below; even though they support the view that FTX was not a huge deal, I want to flag that this methodology doesn’t seem to actually get at the important thing.
But anyway, my original comment text:
As a convenience sample I searched twitter for “effective altruism”. The first reference to FTX doesn’t come until tweet 36, which is a link to this. Honestly it seems mostly like a standard anti-utilitarianism complaint; it feels like FTX isn’t actually the crux.
In contrast, I see 3 e/acc-type criticisms before that, two “I like EA but this AI stuff is too weird” things (including one retweeted by Yann LeCun??), two “EA is tech-bro/not diverse” complaints and one thing about Wytham Abbey.
And this (survey discussed/criticized here):
I just tried to reproduce the Twitter datapoint. Here is the first tweet when I sort by most recent:
Most tweets are negative, mostly referring to the OpenAI thing. Among the top 10 I see three references to FTX. This continues to be quite remarkable, especially given that it’s been more than a year, and these tweets are quite short.
I don’t know what search you did to find a different pattern. Maybe it was just random chance that I got many more than you did.
I used the default sort (“Top”).
(No opinion on which is more useful; I don’t use Twitter much.)
Top was mostly showing me tweets from people I follow, so my sense is it was filtered in a personalized way. I am not fully sure how it works, but it didn’t seem like the right type of filter.
Yeah, makes sense. Although I just tried doing the “latest” sort and went through the top 40 tweets without seeing a reference to FTX/SBF.
My guess is that this filter just (unsurprisingly) shows you whatever random thing people are talking about on twitter at the moment, and it seems like the random EA-related thing of today is this, which doesn’t mention FTX.
Probably you need some longitudinal data to have this be useful.
Upvoted, I’m grateful for the sober analysis.
I think this is an oversimplification. The effect is largely caused by competing messages: the modern internet optimizes information for memetic fitness, e.g. by maximizing emotional intensity or persuasive effect, and people are routinely exposed to so much content pulling their minds in different directions that they become wary (or come to see strong reactions to anything as immature, since a large portion of internet outcries come from teenagers). This is the main reason people take things with a grain of salt.
However, Overton windows can still undergo big and lasting shifts (a process that could also be engineered deliberately long before generative AI emerged, e.g. via “clown attacks” that exploit social-status instincts to consistently hijack a person’s impression of a targeted concept). The 80,000 Hours podcast with Cass Sunstein covered how Overton windows are dominated by vague impressions of which ideas are acceptable or unacceptable to talk about (note: that episode is from 2019). This dynamic could plausibly strangle EA’s access to fresh talent, and AI safety’s access to mission-critical policy influence, for several years (which would be far too long).
On the flip side, johnswentworth had a pretty good take on this: that the human brain is instinctively predisposed to over-focus on the risk of one’s in-group becoming unpopular with everyone else:
Thanks for the helpful comment – I had not seen John’s dialogue and I think he is making a valid point.
Fair point that the lack of impact might not be due to short attention spans but instead to things like competing messages.
In case you missed it: Angelina Li compiled some growth metrics about EA here; they seem to indicate that FTX’s collapse did not “strangle” EA (though it probably wasn’t good).
This December is the last month in which unlimited redemption of Manifold Markets currency for charitable donations is assured: https://manifoldmarkets.notion.site/The-New-Deal-for-Manifold-s-Charity-Program-1527421b89224370a30dc1c7820c23ec
I highly recommend redeeming currency for donations this month, since there is orders of magnitude more currency outstanding than can be donated in future months.
Millions of people contract pork tapeworm infections annually; these infections cause ~30% of the ~50 million active epilepsy cases worldwide:
https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(14)61353-2/fulltext
Perhaps cultural pork consumption restrictions are onto something:
https://en.wikipedia.org/wiki/Religious_restrictions_on_the_consumption_of_pork
I thought this recent study in JAMA Open on vegan nutrition was worth a quick take due to its clever and legible study design:
https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2812392
This was an identical twin study in which one twin went vegan for eight weeks, and the other didn’t. Nice results on some cardiometabolic lab values (e.g., LDL-C) even though the non-vegan twin was also upping their game nutritionally. I don’t think the fact that vegan diets generally improve cardiometabolic health is exactly fresh news, but I find the study design to be unusually legible for nutritional research.
The following table is from Scott Alexander’s post, which you should check out for the sources and (many, many) caveats.
I was watching the recent DealBook Summit interview with Elon Musk, and he said the following about OpenAI (emphasis mine):
I’m posting here because I remember reading a claim that Elon started OpenAI after getting bad vibes from Demis Hassabis. But he claims that his actual motivation was that Larry Page is an extinctionist. That seems like a better reason.
By the time Musk (and Altman et al.) started OA, it was in response to Page’s Google buying Hassabis’s DeepMind. So there is no real contradiction between being spurred by Page’s attitude and treating Hassabis as the specific enemy. It’s not like Page was personally overseeing DeepMind (or Google Brain) research projects, and Page quasi-retired about a year after the DM purchase anyway (and about half a year before OA officially became a thing).
I stumbled upon this quote in this recent Economist article [archived] about OpenAI. I couldn’t find any other good source supporting the claim, so it might not be accurate. The earliest mention of the claim I could find is from January 17th, 2023, although it only talks about OpenAI “proposing” the rule change.
If true, this would make the profit cap less meaningful, especially on longer AI timelines. For example, a $1 billion investment made in 2023 would be capped at roughly 1,540x by 2040.
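To make the arithmetic behind that ~1,540x figure explicit, here is a minimal sketch. It assumes a 100x base cap and 20% annual cap increases starting in 2025; both numbers are assumptions drawn from public reporting rather than stated in the article quoted above.

```python
# Minimal sketch of the compounding behind the ~1,540x figure.
# Assumptions (not from the quoted article): a 100x base profit cap
# and 20% annual cap increases beginning in 2025.
base_cap = 100                      # assumed original cap multiple
annual_growth = 1.20                # assumed 20% increase per year
start_year, target_year = 2025, 2040

cap_multiple = base_cap * annual_growth ** (target_year - start_year)
print(f"Cap multiple in {target_year}: ~{cap_multiple:.0f}x")  # ~1541x
```

Under these assumptions the cap roughly doubles every four years, which is why it dwarfs the original 100x on longer timelines.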
I’ve talked to some people who are involved with OpenAI secondary markets, and they’ve broadly corroborated this.
One source told me that after a specific year (they didn’t say which), the cap can increase by 20% per year, and that the company can further adjust the cap as it fundraises.
As of January 2023, the institutional markets were not predicting AGI within 30 years.