Reflection on my time as a Visiting Fellow at Rethink Priorities this summer
I was a Visiting Fellow at Rethink Priorities this summer. They’re hiring right now, and I have lots of thoughts on my time there, so I figured that I’d share some. I had some misconceptions coming in, and I think I would have benefited from a post like this, so I’m guessing other people might, too. Unfortunately, I don’t have time to write anything in depth for now, so a shortform will have to do.
Fair warning: this shortform is quite personal and one-sided. In particular, when I tried to think of downsides to highlight to make this post fair, few came to mind, so the post is very upsides-heavy. (Linch’s recent post has a lot more on possible negatives about working at RP.) Another disclaimer: I changed in various ways during the summer, including in terms of my preferences and priorities. I think this is good, but there’s also a good chance of some bias (I’m happy with how working at RP went because working at RP transformed me into the kind of person who’s happy with that sort of work, etc.). (See additional disclaimer at the bottom.)
First, some vague background on me, in case it’s relevant:
I finished my BA this May with a double major in mathematics and comparative literature.
I had done some undergraduate math research, had taught in a variety of contexts, and had worked at Canada/USA Mathcamp, but did not have a lot of proper non-academic work experience.
I was introduced to EA in 2019.
Working at RP was not what I had expected (it seems likely that my expectations were skewed).
One example of this was how my supervisor (Linch) held me accountable. Accountability was structured in a way that helped me focus on goals (“milestones”) rather than making me feel guilty about falling behind. (Perhaps I had read too much about bad workplaces and poor incentive structures, but I was quite surprised and extremely happy about this fact.) This was a really helpful transition for me from the university context, where I often had to complete large projects with less built-in support. For instance, I would have big papers due as midterms (or final exams that accounted for 40% of a course grade), and I would often procrastinate on these because they were big, hard to break down, and potentially unpleasant to work on. (I got really good at writing a 15-page draft overnight.)
In contrast, at Rethink, Linch would help me break down a project into steps (“do 3 hours of reading on X subject,” “reach out to X person,” “write a rough draft of brainstormed ideas in a long list and share it for feedback,” etc.), and we would set deadlines for those. Accomplishing each milestone felt really good, and kept me motivated to continue with the project. If I was behind schedule, he would help me reprioritize and think through the bottlenecks, and I would move forward. (Unless I’m mistaken, managers at RP had taken a management course in order to make sure that these structures worked well — I don’t know how much that helped because I can’t guess at the counterfactual, but from my point of view, they did seem quite prepared to manage us.)
Another surprise: Rethink actively helped me meet many (really cool) people (both when they did things like give feedback, and through socials or 1-1’s). I went from ~10 university EA friends to ~25 people I knew I could go to for resources or help. I had not done much EA-related work before the internship (e.g. my first EA Forum post was due to RP), but I never felt judged or less respected for that. Everyone I interacted with seemed genuinely invested in helping me grow. They sent me relevant links, introduced me to cool new people, and celebrated my successes.
I also learned a lot and developed entirely new interests. My supervisor was Linch, so it might be unsurprising that I became quite interested in forecasting and related topics. Beyond this, however, I found the work really exciting, and explored a variety of topics. I read a bunch of economics papers and discovered that the field was actually really interesting (this might not be a surprise to others, but it was to me!). I also got to fine-tune my understanding of and opinions on a number of questions in EA and longtermism. I developed better work (or research) habits, gained some confidence, and began to understand myself better.
Here’s what I come up with when I try to think of negatives:
I struggled to some extent with the virtual setting (e.g. due to tech or internet issues). Protip: if you find yourself with a slow computer, fix that situation asap.
There might have been too much freedom for me: I probably spent too long choosing and narrowing my next project topics. Still, this wasn’t purely negative; I think I ended up learning a lot during the exploratory interludes (deep dives into things like x-risks from great power conflict), even though they didn’t help me produce outputs. As far as I know, this issue is less relevant for more senior positions, and a number of more concrete projects are more straightforwardly available now. (It also seems likely that I could have mitigated this by realizing it would be an issue right away.)
I would occasionally fall behind and become stressed about that. A few tasks became ugh fields. As the summer progressed, I think I got better about immediately telling Linch when I noticed myself feeling guilty or unhappy about a project, and this helped a lot.
Opportunity cost. I don’t know exactly what I would have done during the summer if not RP, but it’s always possible it would have been better.
Obviously, if I were restarting the summer, I would do some things differently. I might focus on producing outputs faster. I might be more active in trying to meet people. I would probably organize my daily routine differently. But some of the things I list here are precisely changes in my preferences or priorities that result from working at RP. :)
Feel free to ask questions if you have any, though I should note that I might not be able to answer many, as I’m quite low on free time (I just started a new job).
Note: nobody pressured me to write this shortform, although Linch & some other people at RP did know I was doing it and were happy for it. For convenience, here’s a link to RP’s hiring page.
Thanks for writing this Lizka! I agree with many of the points in this [I was also a visiting fellow on the longtermist team this summer]. I’ll throw my two cents in about my own reflections (I broadly share Lizka’s experience, so here I just highlight the upsides/downsides that especially resonated with me, or things unique to my own situation):
Vague background:
Finished BSc in PPE this June
No EA research experience and very little academic research experience
Introduced to EA in 2019
Upsides:
Work in areas that are intellectually stimulating and feel meaningful (e.g. Democracy, AI Governance).
Working with super cool people. Everyone was super friendly, and clearly supportive of our development as researchers. I also had not written an EA Forum post before RP, but was given a lot of support in breaking this barrier.
Downsides:
Working remotely was super challenging for me. I underestimated how significant a factor this would be to begin with, and so I would not dismiss this lightly. However, I think there are ways that one can remedy this if they are sufficiently proactive/agent-y (e.g. setting up in-person co-working, moving cities to be near staff, using Focusmate, etc). Also, +1 to getting a fast computer (and see Peter’s comment on this).
Imposter syndrome. One downside of working with super cool, brilliant, hard-working people was (for me) a feeling that I was way out of my depth, especially to begin with. This is of course different for everyone, but it was one thing I struggled to fully overcome. However, RP staff are very willing to help out where they can, should this become a problem.
Ugh fields. There were definitely times when I felt somewhat overwhelmed by work, with sometimes negative spirals. This wasn’t helped by personal circumstances, but my manager (Michael) was super accommodating and understanding of this, which helped alleviate guilt.
I might write up a shortform on some of these points in more depth, especially the things I learnt about being a better researcher, if that’s helpful for others.
Overall, I also really enjoyed my time at RP, and would highly recommend :)
(I did not speak to anyone at RP before writing this).
Thanks a lot for writing about your experiences, Lizka and Tom! The details about why you were happy with your managers were especially valuable info for me.
Protip: if you find yourself with a slow computer, fix that situation asap.
Note to onlookers that we at Rethink Priorities will pay up to $2000 for people to upgrade their computers and that we view this as very important! And if you work with us for more than a year, you can keep your new computer forever.
I realize that this policy may not be a great fit for interns / fellows though, so perhaps I will think about how we can approach that.
I think we should maybe just send a new mid-range Chromebook + a high-end headset with built-in mic + other computing supplies to all interns as soon as they start (or maybe before), no questions asked. Maybe consider higher-end equipment for interns who are working on more compute-intensive stuff and/or if they or their managers ask for it.
For some of the intern projects (most notably on the survey team?), more computing power is needed, but since so much of RP work involves Google Docs + looking stuff up fast on the internet + Slack/Google Meet comms, the primary technological bottleneck that we should try to solve is really fast browsing/typing/videocall latency and quality, which Chromebooks and headsets should be sufficient for.
(For logistical reasons I’m assuming that the easiest thing to do is to let the interns keep the chromebook and relevant accessories)
For context, lead poisoning seems to get ~$11-15 million per year right now, and has a huge toll. I’m really excited about this news.
Also, thanks to @ryancbriggs for pointing out that this seems like “a huge win for risky policy change global health effective altruism” and referencing this grant:
In December 2021, GiveWell (or the EA Funds Global Health and Development Fund?) gave a grant to CGD to “to support research into the effects of lead exposure on economic and educational outcomes, and run a working group that will author policy outreach documents and engage with global policymakers.” In their writeup, they recorded a 10% “best case” forecast that in two years (by the end of the grant period), “The U.S. government, other international actors (e.g., bilateral and multilateral donors), and/or national LMIC governments take measurable action to reduce lead exposure—for example, through increased funding for lead mitigation and research, increased monitoring of lead exposure, and/or enactment of regulations.” We’ve reached this best case and it’s been almost exactly two years! (Attributing credit is really hard and I have no experience and little context in this area — as far as I know this could have happened without that grant or related advocacy. But it’s still notable to me that a CGD report is cited in Power’s announcement.)
This is awesome! Is there a page somewhere that collates the results of a bunch of internal forecasting by the end of the grant period? I’d be interested
(This was initially meant as part of this post,[1] but while editing I thought it didn’t make a lot of sense there, so I pulled it out.)
I came to CEA with a very pro-criticism attitude. My experience there reinforced those views in some ways,[2] but it also left me more attuned to the costs of criticism (or of some pro-criticism attitudes). (For instance, I used to see engaging with all criticism as virtuous, and have changed my mind on that.) My overall takes now aren’t very crisp or easily summarizable, but I figured I’d try to share some notes.
...
It’s generally good for a community’s culture to encourage criticism, but this is more complicated than I used to think.
Here’s a list of things that I believe about criticism:
Criticism or critical information can be extremely valuable.
It can be hard for people to surface criticism (e.g. because they fear repercussions), which means criticism tends to be undersupplied.[3]
Requiring critics to present their criticisms in specific ways will likely stifle at least some valuable criticism.
It can be hard to get yourself to engage with criticism of your work or things you care about.
It’s easy to dismiss true and important criticism without noticing that you’re doing it.
→ Making sure that your community’s culture appreciates criticism (and earnest engagement with it), tries to avoid dismissing critical content based on stylistic or other non-fundamental qualities, encourages people to engage with it, and disincentivizes attempts to suppress it can be a good way to counteract these issues.
At the same time, trying to actually do anything is really hard.[4]
Appreciation for doers is often undersupplied.
Being in leadership positions or engaging in public discussions is a valuable service, but opens you up to a lot of (often stressful) criticism, which acts as a disincentive for being public.
Psychological safety is important in teams (and communities), so it’s unfortunate that critical environments lead more people to feel like they would be judged harshly for potential mistakes.
Not all criticism is useful enough to be worth engaging with (or sharing).
Responding to criticism can be time-consuming or otherwise costly and isn’t always worth it.[5]
Sometimes people who are sharing “criticism” hate the project for reasons that aren’t what’s explicitly stated, or just want to vent or build themselves up.[6]
… and cultures like the one described above can exacerbate these issues.
This was in that post because I ended up engaging with a lot of discussion about the effects of criticism in EA (and of the EA Forum’s critical culture) as part of running a Criticism Contest (and generally working on CEA’s Online Team).
I’ve experienced first-hand how hard it is to identify flaws in projects you’re invested in, I’ve seen how hard it is for some people to surface critical information, and I’ve noticed some ways in which criticism can be shut down or disregarded by well-meaning people.
I would be excited about this and have wondered for a while if we should have EA awards. This Washington Post article brought the idea to my mind again:
Civil servants who screwed up were dragged before Congress and into the news. Civil servants who did something great, no one said a word about. There was thus little incentive to do something great, and a lot of incentive to hide. The awards were meant to correct that problem. “There’s no culture of recognition in government,” said Max Stier, whom Heyman hired to run the Partnership. “We wanted to create a culture of recognition.”
(This was initially meant as part of this post[1], but I thought it didn’t make a lot of sense there, so I pulled it out.)
“Slow-rolling mistakes” are usually much more important to identify than “point-in-time blunders,”[2] but the latter tend to be more obvious.
When we think about “mistakes”, we usually imagine replying-all when we meant to reply only to the sender, using the wrong input in an analysis, including broken hyperlinks in a piece of media, missing a deadline, etc. I tend to feel pretty horrible when I notice that I’ve made a mistake like this.
I now think that basically none of my mistakes of this kind — I’ll call them “Point-in-time blunders” — mattered nearly as much as other “mistakes” I’ve made by doing things like planning my time poorly, delaying for too long on something, setting up poor systems, or focusing on the wrong things.
This second kind of mistake — let’s use the phrase “slow-rolling mistakes” — is harder to catch; I think sometimes I’d identify them by noticing a nagging worry, or by having multiple conversations with someone who disagreed with me (and slowly changing my mind), or by seriously reflecting on my work or on feedback I’d received.
...
This is not a novel insight, but I think it was an important thing for me to realize. Working at CEA helped move me in this direction. A big factor in this, I think, was the support and reassurance I got from people I worked with.
This was over two years ago, but I still remember my stomach dropping when I realized that instead of using “EA Forum Digest #84” as the subject line for the 84th Digest, I had used “...#85.” Then I did it AGAIN a few weeks later (instead of #89). I’ve screenshotted Ben’s (my manager’s) reaction.
It was there because my role gave me the opportunity to actually notice a lot of the mistakes I was making (something that I think is harder if you’re working on something like research, or in a less public role), which also meant I could reflect on them.
“Slow-rolling mistakes” are usually much more important to identify than “point-in-time blunders”
After reading your post, I wasn’t sure you were right about this. But after thinking about it for a few minutes, I can’t come up with any serious mistakes I’ve made that were “point-in-time blunders”.
The closest thing I can think of is when I accidentally donated $20,000 to the GiveWell Community Foundation instead of The Clear Fund (aka GiveWell), but fortunately they returned the money so it all worked out.
If you feel overwhelmed by FTX-collapse-related content on the Forum, you can hide most of it by using a tag filter: hover over the “FTX collapse” tag on the Frontpage (find it to the right of the “Frontpage Posts” header), and click on “Hidden.”
[Note: this used to say “FTX crisis,” and that might still show up in some places.]
On October 27, 1962 (during the Cuban Missile Crisis), the Russian diesel-powered submarine B-59 started experiencing[1] nearby depth charges from US forces above them; the submarine had been detected and US ships seemed to be attacking. The submarine’s air conditioning was broken,[2] CO2 levels were rising, and B-59 was out of contact with Moscow. Two of the senior officers on the submarine, thinking that a global war had started, wanted to launch their “secret weapon,” a 10-kiloton nuclear torpedo. The captain, Valentin Savitsky, apparently exclaimed: “We’re gonna blast them now! We will die, but we will sink them all — we will not become the shame of the fleet.”
The ship was authorized to launch the torpedo without confirmation from Moscow, but all three senior officers on the ship had to agree.[3] Chief of staff of the flotilla Vasili Arkhipov refused. He convinced Captain Savitsky that the depth charges were signals for the Soviet submarine to surface (which they were) — if the US ships really wanted to destroy the B-59, they would have done it by now. (Part of the problem seemed to be that the Soviet officers were used to different signals than the ones the Americans were using.) Arkhipov calmed the captain down[4] and got him to surface the submarine to get orders from the Kremlin, which eventually defused the situation.
The B-59 was apparently the only submarine in the flotilla that required three officers’ approval in order to fire the “special weapon” — the others only required the captain and the political officer to approve the launch.
From skimming some articles and first-hand accounts, it seems unclear if the captain just had an outburst and then genuinely wanted to follow protocol (and use the torpedo), or if he was truly reacting irrationally/emotionally because of the incredibly stressful environment. Accounts conflict a bit, and my sense is that orders around using the torpedo were unclear and overly permissive (or even encouraging of its use).
For anyone interested in watching a dramatic reconstruction of this incident, go to timestamp 43:30–47:05 of The Man Who Saved The World. (I recommend watching at 1.5x speed.)
What do you think the final voting mechanism should be, and why? E.g. approval voting, ranked-choice voting, quadratic voting, etc.
Considerations might include: how well this will allocate funds based on real preferences, how understandable it is to people who are participating in the Donation Election or following it, etc.
I realize that I might be opening a can of worms, but I’m looking forward to reading any comments! I might not have time to respond.
Users will be able to “pre-vote” (to signal that they’re likely to vote for some candidates, and possibly to follow posts about some candidates) for as many candidates as they want. The pre-votes are anonymous (as are final votes), but the total numbers will be shown to everyone. There will be a separate process for final voting, which will determine the three winners in the election. The three winners will receive the winnings from the Donation Election Fund, split proportionally based on the votes.
Only users who had an account as of October 22, 2023, will be able to vote, unfortunately. We’ve had to add this restriction to avoid election manipulation (we’ll also be monitoring in other ways). I realize that this limits genuine new users’ ability to vote, but hopefully the fact that newer users can participate in other ways (like by encouraging others to vote for some candidates, or by donating to candidates/the Donation Election Fund) helps a bit.
A quick preview of the pre-votes, in case you’re interested:
I’m a researcher on voting theory, with a focus on voting over how to divide a budget between uses. Sorry I found this post late, so things are probably already decided, but I thought I’d add my thoughts. I’m going to assume approval voting as the input format.
There is an important high-level decision to make first regarding the objective: do we want to pick charities with the highest support (majoritarian) or do we want to give everyone equal influence on the outcome if possible (proportionality)?
If the answer is “majoritarian”, then the simplest method makes the most sense: give all the money to the charity with the highest approval score. (This maximizes the sum of voter utilities, if you define voter utility to be the amount of money that goes to the charities a voter approves.)
If the answer is “proportionality”, my top recommendation would be to drop the idea of having only 3 winners and not impose a limit, and instead use the Nash Product rule to decide how the money is split [paper, wikipedia]. This rule has a nice interpretation: say there are 100 voters; then every voter is assigned 1/100th of the budget and gets a guarantee that this part is only spent on charities that the voter has approved. The exact proportions of how each voter’s share is used are decided based on the overall popularity of the charities. This rule has various nice properties, including Pareto efficiency and strong proportionality properties (guaranteeing things like “if 30% of voters vote for animal charities, then 30% of the budget will be spent on animal charities”).
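For concreteness, here is a minimal sketch of how the Nash Product rule could be computed in this setting, treating it as maximizing the sum of voters’ log-utilities with a generic optimizer. The charity names, ballots, and budget below are invented for illustration; this is a sketch, not the Forum’s actual mechanism.

```python
# Rough sketch of the Nash Product rule for dividing a budget given approval
# ballots: maximize the product (equivalently, the sum of logs) of each voter's
# utility, where a voter's utility is the money going to charities they approve.
# All names and numbers are hypothetical.
import numpy as np
from scipy.optimize import minimize

charities = ["A", "B", "C", "D"]
budget = 100.0
ballots = [{"A", "B"}, {"A"}, {"C", "D"}, {"C"}, {"A", "C"}]  # approval sets

approval_matrix = np.array(
    [[1.0 if c in ballot else 0.0 for c in charities] for ballot in ballots]
)

def neg_log_nash_welfare(x):
    utilities = approval_matrix @ x  # money flowing to each voter's approved charities
    return -np.sum(np.log(utilities + 1e-9))  # small epsilon for numerical safety

result = minimize(
    neg_log_nash_welfare,
    x0=np.full(len(charities), budget / len(charities)),
    bounds=[(0.0, budget)] * len(charities),
    constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - budget}],
    method="SLSQP",
)
print(dict(zip(charities, np.round(result.x, 2))))
```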
If you want to stick with the 3-winner constraint, there is no academic research about this exact type of voting situation. But if proportionality is desired, I would not select the 3 winners as the 3 charities with the highest vote score, but would instead use Proportional Approval Voting [wikipedia] to make the selection. This would avoid the issue that @Tetraspace identified in another comment, where there is a risk that all 3 top charities are similar and belong to the largest subgroup of voters. Once the selection of 3 charities is done, I would not split the money in proportion to approval scores but either (a) split it equally, or (b) normalize the scores so that a voter who approved 2 of the 3 winners contributes 0.5 points to each of them, instead of 1 point to each. Otherwise those who approved 2 out of 3 get higher voting weight.
This maximizes the sum of voter utilities, if you define voter utility to be the amount of money that goes to the charities a voter approves
This definition of “voter utility” feels very different to how EAs think about charities: the definition would imply that you are indifferent between all charities that you approve of.
A better definition of “voter utility” would take into account the relative worth of the charities (e.g. a voter might think that charity A is 3x better than charity B, which is 5x better than charity C).
I think since there can be multiple winners, letting people vote on the ideal distribution then averaging those distributions would be better than direct voting, since it most directly represents “how voters think the funds should be split on average” or similar, which seems like what you want to capture? And also is still very understandable I hope.
E.g. if I think 75% of the pool should go to LTFF and 20% to GiveWell, and 5% to the EA AWF, 0% to all the rest, I vote 75%/20%/5%/0%/0%/0% etc. Then, you take the average of those distributions across all voters. I guess it gets tricky if you are only paying out to the top three, but maybe you can just scale their percentage splits? IDK.
If not that or if it is annoying to implement, IMO approval voting or quadratic are probably best, but am not really sure. Ranked choice feels like it is so explicitly designed for single winner elections that it is harder to apply here.
If we’re thinking of it as “ideally I’d like 75% of the money to go here, 20% here, etc” we could just give people 100 votes each and give money to the top 3?
This would be very similar to first-past-the-post (third-past-the-post in this case), and has many of the same drawbacks as first-past-the-post, such as lots of strategic voting. Giving a voice to people whose favorite charities are not wildly popular seems preferable (as would be the case with ranked-choice voting). The fact that you have 100 votes instead of 1 vote doesn’t make much of a difference here (imagine a country where everyone has 99 clones; election systems would mostly still have the same advantages and disadvantages).
I think this could be fun. An advantage here is that voters have to think about the relative value of different charities, rather than just deciding which are better or worse. This could also be an important aspect when we want people to discuss how they plan to vote/how others should vote.
If you want to be explicit about this, you could also consider designing the user interface so that users enter these relative differences of charities directly (e.g. “I vote charity A to be 3 times as good as charity B” rather than “I assign 90 vote credits to charity A and 10 vote credits to charity B”).
Note however, that due to the top-3 cutoff, putting in the true relative differences between charities might not be the optimal policy.
A technical remark: If you want only to do payouts for the top three candidates, instead of just relying on the final vote, I think it would be better to rescale the voting credits of each voter after kicking out the charity with the least votes and then repeating the process until there are only 3 charities left. This would reduce tactical voting and would respect voters more who pick unusual charities as their top choices. This process has some similarities with ranked-choice voting. Additionally, users should have the ability to enter large relative differences (or very tiny votes like 1 in a billion), so their votes are still meaningful even after many eliminations.
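To make the elimination-and-rescaling idea above concrete, here is a small hypothetical sketch (the ballots and charity names are made up, and this is only one way the process could be implemented):

```python
# Sketch: repeatedly drop the charity with the fewest (rescaled) vote credits,
# renormalizing each voter's remaining credits, until only `num_winners` remain.
def eliminate_and_rescale(ballots, num_winners=3):
    """ballots: list of dicts mapping charity -> vote credits (hypothetical data)."""
    remaining = {c for ballot in ballots for c in ballot}

    def normalized(ballot):
        # Rescale a ballot so its credits over the remaining charities sum to 1.
        kept = {c: v for c, v in ballot.items() if c in remaining and v > 0}
        total = sum(kept.values())
        return {c: v / total for c, v in kept.items()} if total > 0 else {}

    def totals():
        sums = {c: 0.0 for c in remaining}
        for ballot in ballots:
            for c, share in normalized(ballot).items():
                sums[c] += share
        return sums

    while len(remaining) > num_winners:
        current = totals()
        remaining.remove(min(current, key=current.get))  # drop the weakest charity

    final = totals()
    grand_total = sum(final.values())
    return {c: share / grand_total for c, share in final.items()}

ballots = [
    {"A": 70, "B": 20, "C": 10},
    {"B": 50, "D": 50},
    {"C": 80, "A": 20},
    {"D": 100},
]
print(eliminate_and_rescale(ballots))  # proportions of the fund for the final 3
```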
Approval voting:
I think voting either “approve” or “disapprove” does not match how EAs think about charities.
I generally approve of a lot of charities within the EA space, but would not vote “approve” for these charities.
I worry that a lot of tactical voting can take place here, especially if people can see the current votes or the pre-votes.
For example, a person who both approves of the 3rd-placed charity and the 4th-placed charity (by overall popularity), might want to switch their vote to “disapprove” for the (according to them) worse charity.
For example, voters are incentivized to give different votes to the 3rd-placed and 4th-placed charity, because that is where the difference will have the biggest impact on the money paid out.
Or a person who disapproves of all the top charities might switch a vote from “disapprove” to “approve” so that their vote matters at all.
Ranked-choice voting:
I am assuming here that the elimination process in ranked-choice stops once you reach the top 3 and that votes are then distributed proportionally. I think this would be a good implementation choice (mostly because proportional voting itself would be a decent choice by itself, so doing it for the top 3 seems reasonable).
Ranking charities could be more satisfying for voters than having to figure out where to draw the line between “approve” and “disapprove”, or putting in lots of numeric values.
Generally, ranked-choice voting seems like an ok choice.
how well will these allocate funds?:
I am quite unsure here, and finding a best charity based on expressed preferences of lots of people with lots of opinions will be difficult in any case.
My best guess here is that ranked-choice voting > quadratic voting > approval voting.
A disadvantage of quadratic voting here is that it can happen that some fraction of the money will be paid out to sub-optimal charities (even if everyone agrees that charity C is worse than A and B, then it will likely still be rational for voters to assign non-zero weight to charity C, corresponding to non-zero payout).
understandability:
I think approval voting is easier to understand than ranked-choice voting, which is easier to understand than quadratic voting. This is both for the user interface and for understanding the whole system.
Also, the mental effort for making a voting decision is less under ranked-choice and approval voting.
I think the precise effects of voters’ choices will be difficult to estimate in any system, so keeping the system easy to understand seems valuable.
general remarks:
Different voting mechanisms can be useful for different purposes, and paying 3 charities different amounts of money is a different use case than selecting a single president, so not all considerations and analyses of different voting mechanisms will carry over to our particular case.
The top-3 rule will incentivize tactical voting in all these systems (whereas in a purely proportional system there would be no tactical voting). Maybe this number should be increased a bit (especially if we use quadratic voting).
If there are lots of charities to choose from, it will be quite an effort to evaluate all these charities.
Potentially, you could give each voter a small number of charities to compare with each other, and then aggregate the result somehow (although that would be complicated and would change the character of the election).
Or there can be two phases of voting, where the first phase narrows it down to 3-5 charities and then the second phase determines the proportions.
My personal preferences:
Obviously, we should have a meta-vote to select the three top voting methods among user-suggested voting methods and then hold three elections with the respective voting methods, each determining how a fraction of the fund (proportional to the vote that the voting method received in the meta-vote) gets distributed. And as for the voting method for this meta-vote, we should use… ok, this meta-voting stuff was not meant entirely seriously.
In my current personal judgement, I prefer quadratic voting over ranked-choice and ranked-choice over approval voting.
I might be biased here towards more complex systems.
I think an important factor is also that I might like more data about my preferences as a voter:
With quadratic voting, I can express my relative preferences between charities quantitatively.
With ranked-choice voting, I can rank charities, but cannot say by how much I prefer one charity over another.
With approval voting, I can put charities in only two categories.
One issue that comes up with multi-winner approval voting is: suppose there are 15 longtermists and 10 global poverty people. All the longtermists approve the LTFF, MIRI, and Redwood; all the global poverty people approve the Against Malaria Foundation, GiveWell, and LEEP.
The top three vote winners are picked: they’re the LTFF, with 15 votes, MIRI, with 15 votes, and Redwood, with 15 votes.
I’m going to stick my neck out and say that approval voting is the best option here. Why?
It avoids almost all of the problems with plurality voting. In non-pathological arrangements of voter preferences and candidates, it will produce the ‘intuitively’ correct option—see here for some fun visualisations.
It has EA cred, see Aaron Hamlin’s interview on 80k here
And most importantly, it’s understandable and legible—you don’t need people to trust an underlying apportionment algorithm or send out flyers explaining the D’Hondt method to voters or whatever. Just vote for the options you approve of on the ballot. One person, one ballot. Most approvals wins. Simple.
I fear that EAs who are really into this sort of thing are going to nerd-snipe the whole thing into a discussion/natural experiment about optimal voting systems instead of what would be most practical for this Donation Election. A lot of potential voters and donors may not be interested in using a super fancy, optimal, but technically involved voting method, and it could be the kind of small inconvenience that turns people off the whole enterprise.
Now, before all you Seeing Like a State fans come at me saying how legibility is the devil’s work I think I’m just going to disagree with you pre-emptively.[1] Sometimes there is a tradeoff between fidelity and legibility, and too much weighting on illegible technocracy can engender a lack of trust and have severe negative consequences.
Actually it’s interesting that Glen references Scott as on his side, I think there’s actually some tension between their positions. But that’s probably a topic for another post/discussion
Won’t people be motivated to vote “disapprove” for orgs in all cause areas but their preferred one? That would seemingly reduce approval voting to FPTP between cause areas, in effect.
Well, the top 3 charities will get chosen, so there’s no benefit to only selecting 1 option unless you really do believe only that 1 charity ought to get funded. I think AV may be more robust to these concerns than some think,[1] but I think all voting systems will have these edge cases.
I also may be willing to simply bite the bullet here and trade off a bit of strategic voting for legibility. But again, I don’t think approval is worse on this front than many other voting methods.
But my fundamental objection is that this is primarily a normative problem, where we want to be a community who’ll vote honestly and not strategically. If GWWC endorse approval voting, then when you submit your votes there could be a pop-up with “I pledge not to vote strategically” or something like that.
I don’t think any voting system is immune to that—Democracy works well because of the norms it spreads and trust it instills, as opposed to being the optimal transmission mechanism of individual preferences to a social welfare function imho.
Thanks. I assume there will be at least 3 orgs for each cause area.
If we can assume the forum is a “community who’ll vote honestly and not strategically,” approval voting would work—but we shouldn’t limit the winners to three in that case. Proportional representation among all orgs with net positive approval would be the fullest extent of the community’s views, although some floor on support or cap on winners would be necessary for logistical reasons.
I’d prefer a voting mechanism that factored in as much of the vote as possible. I suspect that cause area will be a major determinant of individuals’ votes, and would prefer that the voting structure promote engagement and participation for people with varying cause prioritizations.
Suppose we have 40% for cause A orgs, 25% for cause B orgs, 20% for cause C orgs, and 15% for various smaller causes. I would not prefer a method likely to select three organizations from cause A—I don’t think that outcome would be actually representative of the polis, and voting rules that would lead to such an outcome will discourage engagement and participation from people who sense that their preferred causes are not the leading one.
I’m not sure how to effectuate that preference in a voting system, although maybe people who have thought about voting systems more deeply than I could figure it out. I do think approval voting would be problematic; some voters might strategically disapprove all candidates except in their preferred cause area, which could turn the election into a cause-area election rather than an organization-specific one. Otherwise, it might be appropriate to assign each organization to a cause area, and provide that (e.g.) no more than half of all funds will go to organizations in the same cause area. If that rule were invoked, it would likely require selecting additional organizations than the initial three.
The more I think about this, the more I’d like at least one winner to be selected randomly among orgs that reach a certain vote threshold—unsure if it should be weighted by vote total or equal between orgs. Maybe that org gets 15 to 20 percent of the take? That’s a legible way to keep minority voices engaged despite knowing their preferences won’t end up reflected in the top three.
Proportional voting with some number of votes, between 1 and 10.
If it were me, the thing I’d experiment on is being able to donate votes to someone else. That feels like something I’d like to see more of on a larger scale. I give a vote to Jenifer and Alan, she researches longterm stuff, he looks into animal welfare.
FWIW, I mildly disagree with this, because a major part of the appeal of donation elections stuff (if done well) is that the results more closely model a community consensus than other giving mechanisms, and being able to donate votes would distort that in some sense. I think I don’t see the appeal of being able to donate votes in this context over just telling Jenifer + Alan that they can control where one donates to some extent, or donating to a fund. Or, if not donating to the election fund, just asking Jenifer + Alan for their opinion and changing your own mind accordingly.
Do you intend to have one final winner or would it be ok to pay out the fund to various charities in different proportions (maybe with a minimum payout to avoid logistical hassle)? In the latter case, a consideration could also be proportional voting. But it is not clear how approval voting and ranked choice would work exactly in those cases.
Also, am I understanding correctly that donating more to that fund does not get you additional votes?
We’re planning on having 3 winners, and we’ll allocate the funding proportionally across those three winners. So e.g. if we do approval voting, and candidate A gets 5 votes, B gets 2, C gets 20, and D gets 25, and we’re distributing $100, then A (5 votes), C (20 votes), and D (25 votes) win, and we’d send $10 to A, $40 to C, and $50 to D. I think this would straightforwardly work with quadratic voting (each person just has multiple vote-points). I haven’t thought enough about how “proportional” allocation would work with ranked-choice votes.
And yep, donating more to that fund won’t get you additional votes.
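To make the allocation rule above concrete, here is a tiny sketch of “top 3 by approval votes, split proportionally,” using the hypothetical vote counts from the example:

```python
# Sketch of the proposed payout rule: the 3 candidates with the most approval
# votes win, and the pot is split in proportion to their vote totals.
def allocate(votes, pot, num_winners=3):
    winners = sorted(votes, key=votes.get, reverse=True)[:num_winners]
    total_winner_votes = sum(votes[w] for w in winners)
    return {w: pot * votes[w] / total_winner_votes for w in winners}

print(allocate({"A": 5, "B": 2, "C": 20, "D": 25}, pot=100))
# -> {'D': 50.0, 'C': 40.0, 'A': 10.0}
```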
Superman gets to business [private submission to the Creative Writing Contest from a little while back]
“I don’t understand,” she repeated. “I mean, you’re Superman.”
“Well yes,” said Clark. “That’s exactly why I need your help! I can’t spend my time researching how to prioritize while I should be off answering someone’s call for help.”
“But why prioritize? Can’t you just take the calls as they come?”
Lois clicked “Send” on the email she’d been typing up and rejoined the conversation. “See, we realized that we’ve been too reactive. We were taking calls as they came in without appreciating the enormous potential we had here. It’s amazing that we get to help people who are being attacked, help people who need our help, but we could also make the world safer more proactively, and end up helping even more people, even better, and when we realized that, when that clicked—”
“We couldn’t just ignore it.”
Tina looked back at Clark. “Ok, so what you’re saying is that you want to save people— or help people — and you think there are better and worse ways you could approach that, but you’re not sure which are which, and you realized that instead of rushing off to fight the most immediate threat, you want to, what, do some research and find the best way you can help?”
“Yes, exactly, except, they’re not just better, we think they might be seriously better. Like, many times better. The difference between helping someone who’s being mugged, which by the way is awful, so helping them is already pretty great, but imagine if there’s a whole city somewhere that needs water or something, and there are people dying, and I could be helping them instead. It’s awful to ignore the mugging, but if I’m going there, I’m ignoring the city, and of those...”
“Basically, you’re right, Tina, yes,” said Lois.
“Ok,” Tina felt like she was missing something. “But Lois, you’re this powerful journalist, and Clark, you’re Superman. You can read at, what, 100 words per second? Doesn’t it make more sense for you to do the research? I’d need to spend hours reading about everything from food supply chains in Asia to, I dunno, environmental effects of diverting rivers or something, and you could have read all the available research on this in a week.”
“It’s true, Clark reads fast, and we were even trying to split the research up like that at some point,” said Lois. “But we also realized that the time that Clark was spending reading, even if it wasn’t very long, he could be spending chasing off the villain of the week or whatever. And I couldn’t get to all the research in time. I tried for a while, but I have a job, I need to eat, I need to unwind and watch Spanish soap operas sometimes. I was going insane. So we’ve been stuck in this trap of always addressing the most urgent thing, and we think we need help. Your help.”
“Plus, we don’t even really know what we need to find out. I don’t know which books I should be reading. It’s not even just about how to best fix the problem that’s coming up, like the best way to help that city without water. It’s also about finding new problems. We could be missing something huge.”
“You mean, you need to find the metaphorical cities without water?” Clark was nodding. Lois was tapping out another email. “And you should probably be widening your search, too. Not just looking at people specifically, or looking for cities without water, but also looking for systems to improve, ways to make people healthier. Animals, too, maybe. Aliens? Are there more of you? I’m getting off track.” Tina pulled out the tiny notebook her brother gave her and began jotting down some questions to investigate.
“So, are you in?” Lois seemed a bit impatient. Tina set the notebook aside, embarrassed for getting distracted.
“I think so. I mean, this is crazy, I need to think about it a bit. But it makes sense. And you need help. You definitely shouldn’t be working as a journalist, Clark. I mean, not that I’m an expert, really, but—”
“You kind of are. The expert.” Tina absently noted that Clark perfectly fit her mental image of a proper Kansas farm boy. He was even wearing plaid.
“If you accept the offer.” Lois said, without looking up from her email.
“That’s a terrifying thought. It feels like there should be more people helping, here. You should have someone sanity-checking things. Someone looking for flaws in my reasoning. You should maybe get a personal assistant, too— that could free up a massive amount of your time, and hopefully do a ton of good.” Tina knew she was hooked, but wanted to slow down, wanted to run this whole situation by a friend, or maybe her brother. “Can I tell someone about this? Like, is all of this secret?”
Clark shook his head. “We don’t want to isolate you from your friends or anything. But there will be things that need to be secret. And we’ve had trouble before— secrets are hard—” Clark glanced apologetically at Lois, who looked up from her frantic typing for long enough to shoot him a look, “But as much as possible, we don’t want to fall into bad patterns from the past.”
“I guess there are some dangers with information leaking. You probably have secret weaknesses, or maybe you know things that are dangerous—” Tina’s mind was swirling with new ideas and new worries. “Wait a second, how did you even find me? How do you know I’m not going to, like, tell everyone everything...”
Clark and Lois looked at each other.
“We didn’t really think that through very much. You seemed smart, and nice, and you’d started that phone-an-anonymous-friend service in college. And you wrote a good analysis when we asked you to. Sorry about the lie about the consulting job, by the way.”
“And you really need help.” Tina nodded. “Ok, we definitely need to fine-tune the hiring process. And I’ll start by writing down a list of some key questions.”
“I’ll order takeout,” said Lois, and pulled out her phone.
[I wrote and submitted this shortly before the deadline, but was somewhat overwhelmed with other stuff and didn’t post it on the Forum. I figured I’d go ahead and post it now. (Thanks to everyone who ran, participated in, or encouraged the contest by reading/commenting!)]
I recently ran a quick Fermi workshop, and have been asked for notes several times since. I’ve realized that it’s not that hard for me to post them, and it might be relatively useful for someone.
Quick summary of the workshop
What is a Fermi estimate?
Walkthrough of the main steps for Fermi estimation
Notice a question
Break it down into simpler sub-questions to answer first
Don’t stress about the details when estimating answers to the sub-questions
Consider looking up some numbers
Put everything together
Sanity check
Different models: an example
Examples!
Discussion & takeaways
Resources
Guesstimate is a great website for Fermi estimation (although you can also use scratch paper or spreadsheets if that’s what you prefer)
I don’t see mention of quantifying the uncertainty in each component and aggregating this (usually via simulation). Is this not fundamental to Fermi? (Is it only a special version of Fermi, the “Monte Carlo” version?)
Uncertainty is super important, and it’s really useful to flag. It’s possible I should have brought it up more during the workshop, and I’ll consider doing that if I ever run something similar.
However, I do think part of the point of a Fermi estimate is to be easy and quick.
In practice, the way I’ll sometimes incorporate uncertainty into my Fermis is by running the numbers in three ways:
my “best guess” for every component (2 hours of podcast episode, 100 episodes),
the “worst (reasonable) case” for every component (only 90? episodes have been produced, and they’re only 1.5 hours long, on average), and
the “best case” for every component (150 episodes, average of 3 hours).
Then this still takes very little time and produces a reasonable range: ~135 to 450 hours of podcast (with a best guess of 200 hours). (Realistically, if I were taking enough care to run the numbers 3 times, I’d probably put more effort into the “best guess” numbers I produced.) I also sometimes do something similar with a spreadsheet/more careful Fermi.
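In code, that quick three-scenario version looks something like this (using the podcast numbers from above):

```python
# Three quick runs of the same Fermi estimate: worst reasonable case, best
# guess, and best case for (number of episodes, hours per episode).
scenarios = {
    "worst (reasonable) case": (90, 1.5),
    "best guess": (100, 2.0),
    "best case": (150, 3.0),
}
for name, (episodes, hours_per_episode) in scenarios.items():
    print(f"{name}: {episodes * hours_per_episode:.0f} hours of podcast")
# -> 135, 200, and 450 hours
```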
I could do something more formal with confidence intervals and the like, and it’s truly possible I should be doing that. But I really think there’s a lot of value in just scratching something rough out on a sticky note during a conversation to e.g. see if a premise that’s being entertained is worth the time, or to see if there are big obvious differences that are being missed because the natural components being considered are clunky and incompatible (before they’re put together to produce the numbers we actually care about).
Note that tools like Causal and Guesstimate make including uncertainty pretty easy and transparent.
I really think there’s a lot of value in just scratching something rough out on a sticky note during a conversation to e.g. see if a premise that’s being entertained is worth the time
I agree, but making uncertainty explicit makes it even better. (And I think it’s an important epistemic/numeracy thing to cultivate and encourage). So I think if you are giving a workshop you should make this part of it at least to some extent.
I could do something more formal with confidence intervals and the like
I think this would be worth digging into. It can make a big difference, and it’s a mode we should be moving towards IMO; it should be at the core of our teaching and learning materials. And there are ways of doing this that are not so challenging.
(Of course, maybe in this particular podcast example it is not so important, but in general I think it’s VERY important.)
“Worst case all parameters” is very unlikely. So is “best case everything”.
See the book “How to Measure Anything” for a discussion. Also the Causal and Guesstimate apps.
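For anyone who wants the simulation version discussed above, here is a minimal Monte Carlo sketch of the same podcast estimate (the distributions are invented for illustration; this is roughly the kind of calculation that tools like Guesstimate or Causal automate):

```python
# Monte Carlo version of the Fermi estimate: sample each component from a
# plausible range and look at the distribution of the product.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
episodes = rng.uniform(90, 150, n)             # plausible range for episode count
hours_per_episode = rng.uniform(1.5, 3.0, n)   # plausible range for episode length
total_hours = episodes * hours_per_episode

print(f"median: {np.median(total_hours):.0f} hours")
print(f"90% interval: {np.percentile(total_hours, 5):.0f} to {np.percentile(total_hours, 95):.0f} hours")
```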
Moderation update: We have indefinitely banned 8 accounts[1] that were used by the same user (JamesS) to downvote some posts and comments from Nonlinear and upvote critical content about Nonlinear. Please remember that voting with multiple accounts on the same post or comment is very much against Forum norms.
(Please note that this is separate from the incident described here)
Was emerson_fartz an acceptable username in the first place? (It may not have had a post history, in which case no one may have noticed its existence before the sockpuppeting detection, but the name sounds uncivil toward a living person.)
We have strong reason to believe that Torres (philosophytorres) used a second account to violate their earlier ban. We feel that this means that we cannot trust Torres to follow this forum’s norms, and are banning them for the next 20 years (until 1 October 2042).
LukeDing (and their associated alt account) has been banned for six months, due to voting & multiple-account-use violations. We believe that they voted on the same comment/post with two accounts more than two hundred times. This includes several instances of using an alt account to vote on their own comments.
LukeDing appealed the decision; we will reach out to them and ask them if they’d like us to feature a response from them under this comment.
As some of you might realize, some people on the moderation team have conflicts of interest with LukeDing, so we wanted to clarify our process for resolving this incident. We uncovered the norm violation after an investigation into suspicious voting patterns, and only revealed the user’s identity to part of the team. The moderators who made decisions about how to proceed weren’t aware of LukeDing’s identity (they only saw anonymized information).
Is more information about the appellate process available? The guide to forum norms says “We’re working on a formal process for reviewing submissions to this form, to make sure that someone outside of the moderation team will review every submission, and we’ll update this page when we have a process in place.”
The basic questions for me would include: information about who decides appeals, how much deference (if any) the adjudicator will give to the moderators’ initial decision (which probably should vary based on the type of decision at hand), and what kind of contact between the mods and appellate adjudicator(s) is allowed. On the last point, I would prefer as little ex parte contact as possible, and would favor having an independent vetted “advocate for the appellant” looped in if there needs to be contact to which the appellant is not privy.
Admittedly I have a professional bias toward liking process, but I would err on the side of more process than less where accounts are often linked to real-world identities and suspensions are sometimes for conduct that could be seen as dishonest or untrustworthy. I would prefer public disclosure of an action taken in cases like this only after the appellate process is complete for the same reasons, assuming the user timely indicates a desire to appeal the finding of a norm violation.
Finally, I commend keeping the moderators deciding whether a violation occurred blinded as to the user’s identity as a best practice in cases like this, even where there are no COIs. It probably should be revealed prior to determining a sanction, though.
I would prefer public disclosure of an action taken in cases like this only after the appellate process is complete for the same reasons, assuming the user timely indicates a desire to appeal the finding of a norm violation.
It does intuitively seem like an immediate temporary ban, made public only after whatever appeals are allowed have been exhausted, should give the moderation team basically everything they need while being more considerate of anyone whose appeals are ultimately upheld (i.e. innocent, or mitigating circumstances).
Moderation update: A new user, Bernd Clemens Huber, recently posted a first post (“All or Nothing: Ethics on Cosmic Scale, Outer Space Treaty, Directed Panspermia, Forwards-Contamination, Technology Assessment, Planetary Protection, (and Fermi’s Paradox)”) that was a bit hard to make sense of. We hadn’t approved the post over the weekend and hadn’t processed it yet, when the Forum team got an angry and aggressive email today from the user in question calling the team “dipshits” (and providing a definition of the word) for waiting throughout the weekend.
If the user disagrees with our characterization of the email, they can email us to give permission for us to share the whole thing.
We have decided that this is not a promising start to the user’s interactions on the Forum, and have banned them indefinitely. Please let us know if you have concerns, and as a reminder, here are the Forum’s norms.
We have strong reason to believe that Charles He used multiple new accounts to violate his earlier 6-month-long ban. We feel that this means that we cannot trust Charles He to follow this forum’s norms, and are banning him from the Forum for the next 10 years (until December 20, 2032).
We have already issued temporary suspensions to several suspected duplicate accounts, including one which violated norms about rudeness and was flagged to us by multiple users. We will be extending the bans for each of these accounts to mirror Charles’s 10-year ban, but are giving the users an opportunity to message us if we have made any of those temporary suspensions in error (and have already reached out to them). While we aren’t >99% certain about any single account, we’re around 99% that at least one of these is Charles He.
I find this reflects worse on the mod team than on Charles. This is nowhere near the first time I’ve felt this way.
Fundamentally, it seems the mod team heavily prioritizes civility and following shallow norms above enabling important discourse. The post on forum norms presents a picture of geese all flying in formation and in one direction as the desirable state of the forum; I disagree that this is desirable. Healthy conflict is necessary to sustain a healthy community. Conflict sometimes entails rudeness. Some rudeness here and there is not a big deal and does not need to be stamped out entirely. This also applies to the people who get banned for criticizing EA rudely, even when they’re criticizing EA for its role in one of the great frauds of modern history. Banning EA critics for minor reasons is a short-sighted move at best.
Banning Charles for 10 years (!!) for the relatively small crime of evading a previous ban is a seriously flawed idea. Some of his past actions like doxxing someone (without any malice I believe) are problematic and need to be addressed, but do not deserve a 10 year ban. Some of his past comments, especially farther in the past, have been frustrating and net-negative to me, but these negative actions are not unrelated to some of his positive traits, like his willingness to step out of EA norms and communicate clearly rather than like an EA bot. The variance of his comments has steadily decreased over time. Some of his comments are even moderator-like, such as when he warned EA forum users not to downvote a WSJ journalist who wasn’t breaking any rules. I note that the mod team did not step in there to encourage forum norms.
I also find it very troubling that the mod team has consistent and strong biases in how it enforces its norms and rules, such as not taking any meaningful action against an EA in-group member for repeated and harmful violations of norms but banning an EA critic for 20 years for probably relatively minor and harmless violations. I don’t believe Charles would have received a similar ban if he was an employee of a brand name EA org or was in the right social circles.
Finally, as Charles notes, there should be an appeals process for bans.
the relatively small crime of evading a previous ban
I don’t think repeatedly evading moderator bans is a “relatively small crime”. If Forum moderation is to mean anything at all, it has to be consistently enforced, and if someone just decides that moderation doesn’t apply to them, they shouldn’t be allowed to post or comment on the Forum.
Charles only got to his 6-month ban via a series of escalating minor bans, most of which I agreed with. I think he got a lot of slack in his behaviour because he sometimes provided significant value, but he too frequently behaved in ways that were seriously out of kilter with the goal of a healthy Forum.
I personally think the 10-year thing is kind of silly and he should just have been banned indefinitely at this point, then maybe have the ban reviewed in a little while. But it’s clear he’s been systematically violating Forum policies in a way that requires serious action.
The post on forum norms says a picture of geese all flying in formation and in one direction is the desirable state of the forum; I disagree that this is desirable.
Indefinite suspension with leave to seek reinstatement after a stated suitable period would have been far preferable to a 10-year ban. A tenner isn’t necessary to vindicate the moderators’ authority, and the relevant conduct doesn’t give the impression of someone who needs a full ten years to pass before there is a reasonable probability that they would have become a suitable participant.
It makes a lot of difference to me that Charles’ behavior was consistently getting better. If someone consistently flouts norms without any improvement, at some point they should be indefinitely banned. This is not the case with Charles. He started off with really high variance and at this point has reached a pretty tolerable amount. He has clearly worked on his behavior. The comments he posted while flouting the mods’ authority generally contributed to the conversation. There are other people who have done worse things without action from the mod team. Giving him a 10-year ban without appeal for this feels less like a principled decision and more like another instance of the mod team asserting its authority and deciding not to deal with the messiness someone is causing.
I think this is probably true. I still think that systematically evading a Forum ban is worse behaviour (by which I mean, more lengthy-ban-worthy) than any of his previous transgressions.
There are other people who have done worse things without action from the mod team.
I am not personally aware of any, and am sceptical of this claim. Open to being convinced, though.
Totally unrelated to the core of the matter, but do you intend to turn this into a frontpage post? I’m a bit inclined to say it’d be better for transparency, to inform others about the bans, and to deter potential violators… but I’m not sure; maybe you have a reason for preferring the shortform (or you’ll publish periodic updates on the frontpage).
In other forums and situations, there is a grace period during which a user can comment after receiving a very long ban. I think this is a good feature with several properties of long-term value.
We have strong reason to believe that Charles He used multiple new accounts to violate his earlier 6-month-long ban.
These are some of the accounts I created (but not all[1]):
Here are some highlights of some of the comments made by the accounts, within about a 30 day period.
Pointing out the hollowness of SBF’s business, which then produced a follow-up comment that was widely cited outside the forum and may have helped generate a media narrative about SBF.
My alternate accounts were created successively, as they were successively banned. This was the only reason for subterfuge, which I view as distasteful.
I have information on the methods that the CEA team used to track my accounts (behavioral telemetry, my residential IP). This is not difficult to defeat. Not only did I not evade these methods, but I gave information about my identity several times (resulting in a ban each time). These choices, rooted in that distaste, are why the CEA team is “99% certain”, and (at least in a mechanical sense) why I have this 10-year ban.
We feel that this means that we cannot trust Charles He to follow this forum’s norms, and are banning him from the Forum for the next 10 years (until December 20, 2032).
I believe I am able to defend each of the actions on my previous bans individually (but never have before this). More importantly, I always viewed my behavior as a protest.
At this point, additional discussions are occurring by CEA[1], such as considering my ban from EAG and other EA events. By this, I’ll be joining blacklists of predators and deceivers.
As shown above, my use of alternate accounts did not promote or benefit myself in any way (even setting aside expected moderator action). Others in EA have used sock puppets to try to benefit their orgs, and gone on to be very successful.
Note that the moderator who executed the ban above is not necessarily involved in any way in further action or policy mentioned in my comments. Four different CEA staff members have reached out or communicated to me in the last 30 days.
Moderation update: We have banned “Richard TK” for 6 months for using a duplicate account to double-vote on the same posts and comments. We’re also banning another account (Anin, now deactivated), which seems to have been used by that same user or by others to amplify those same votes. Please remember that voting with multiple accounts on the same post or comment is very much against Forum norms.
(Please note that this is separate from the incident described here)
We’re issuing [Edit: identifying information redacted] a two-month ban for using multiple accounts to vote on the same posts and comments, and in one instance for commenting in a thread while pretending to be two different users. [Edit: the user had a total of 13 double-votes; most were far apart in time and likely accidental, two were upvotes close together on others’ posts (which they claim were accidental as well), but two were deliberate self-upvotes from an alternative account]
This is against the Forum norms around using multiple accounts. Votes are really important for the Forum: they provide feedback to authors and signal to readers what other users found most valuable, so we need to be particularly strict in discouraging this kind of vote manipulation.
A note on timing: the comment mentioned above is 7 months old but went unnoticed at the time; a report about it came in last week and triggered this investigation.
If [Edit: redacted] thinks that this is not right, he can appeal. As a reminder, bans affect the user, not the account.
[Edit: We have retroactively decided to redact the user’s name from this early message, and are currently rethinking our policies on the matter]
Do suspended users get a chance to make a public reply to the mod team’s findings? I don’t think that’s always necessary—e.g., we all see the underlying conduct when public incivility happens—but I think it’s usually warranted when the findings imply underhanded behavior (“pretending”) and the underlying facts aren’t publicly observable. There’s an appeal process, but that doesn’t address the public-reputation interests of the suspended person.
It’s kind of jarring to read that someone has been banned for “violating a norm”—that word to me implies an informal agreement within the community. Why not call them “rules”?
pinkfrog (and their associated account) has been banned for 1 month, because they voted multiple times on the same content (with two accounts), including upvoting pinkfrog’s comments with their other account. To be a bit more specific, this happened on one day, and there were 12 cases of double-voting in total (which we’ll remove). This is against our Forum norms on voting and using multiple accounts.
As a reminder, bans affect the user, not the account(s).
If anyone has questions or concerns, please feel free to reach out, and if you think we made a mistake here, you can appeal the decision.
Multiple people on the moderation team have conflicts of interest with pinkfrog, so I wanted to clarify our process for resolving this incident. We uncovered the norm violation after an investigation into suspicious voting patterns, and only revealed the user’s identity to part of the team. The moderators who made decisions about how to proceed aren’t aware of pinkfrog’s real identity (they only saw anonymized information).
It seems inconsistent to have this info public for some, and redacted for others. I do think it is good public service to have this information public, but am primarily pushing here for consistency and some more visibility around existing decisions.
Agree. It seems potentially pretty damaging to people’s reputations to make this information public (and attached to their names); that strikes me as a much bigger penalty than the bans. There should, at a minimum, be a consistent standard, and I’m inclined to think that standard should be having a high bar for releasing identifying information.
I think we should hesitate to protect people from reputational damage caused by people posting true information about them. Perhaps there’s a case to be made when the information is cherry-picked or biased, or there’s no opportunity to hear a fair response. But goodness, if we’ve learned anything from the last 18 months I hope it would include that sharing information about bad behaviour is sometimes a public good.
I would guess that most people engage in private behavior that would be reputationally damaging if the internet were to find out about it. Just because something is true doesn’t mean you forfeit your rights to not have that information be made public.
I think people might reasonably (though wrongly) assume that forum mods are not monitoring accounts at this level of granularity, and thus believe that their voting behavior is private. Given this, I think mods should warn before publicly censuring. (Just as it would be better to inform your neighbor that you can see them doing something embarrassing through their window before calling the police or warning other people about them—maybe they just don’t realize you can see, and telling them is all they need to not do the thing anymore, which, after all, is the goal.)
Frankly, I don’t love that mods are monitoring accounts at this level of granularity. (For instance, knowing this would make me less inclined to put remotely sensitive info in a forum dm.)
Writing in a personal capacity; I haven’t run this by other mods.
Hi, just responding to these parts of your comment:
I think people might reasonably (though wrongly) assume that forum mods are not monitoring accounts at this level of granularity, and thus believe that their voting behavior is private.
...
Frankly, I don’t love that mods are monitoring accounts at this level of granularity. (For instance, knowing this would make me less inclined to put remotely sensitive info in a forum dm.)
We include some detail on what would lead moderators to look into a user’s voting activity, and what information we have access to, on our “Guide to norms on the Forum” page:
Voting activity is generally private (even admins don’t know who voted on what), but if we have reason to believe that someone is violating norms around voting (e.g. by mass-downvoting many of a different user’s comments and posts), we reserve the right to check what account is doing this. If we suspect that someone is using multiple accounts to vote on the same post, we also reserve the right to check whether the accounts are related, and check their voting history.
...
The following information is accessible to moderators but will only be used to identify behavior such as “sockpuppet” accounts and mass downvoting, in situations where we have strong reason to believe that an account is used to get around a ban (or other restriction), or in the case of severe safety concerns. The moderators will not view or use this information for any other purpose.
The IP address a post/comment came from
The voting history of users
The identity of voters on any given post/comment
(In addition, note that moderators can’t just go into a user’s account and check their voting history even when we do have reason to look into that user. We require one of the Forum engineers to run some queries on the back end to yield this information.)
Finally, to address your concern about direct messages on the Forum: like a regular user, a moderator cannot see into anyone else’s messages.
Thanks for writing this! To clarify a few points even more:
moderators can’t just go into a user’s account and check their voting history even when we do have reason to look into that user. We require one of the Forum engineers to run some queries on the back end to yield this information.
I confirm this, and just want to highlight that
this is pretty rare; we have a high bar before asking developers to look into patterns
usually, one developer looks into things, and shares anonymized data with moderators, who then decide whether it needs to be investigated more deeply
If so, a subset of moderators gets access to deanonymized data to make a decision and contact/warn/ban the user(s)
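(Purely as a hypothetical illustration, not our actual implementation: a back-end check of the kind described above might look roughly like the sketch below, which flags pairs of pseudonymized accounts that repeatedly vote on the same content close together in time. The function name, thresholds, and data shapes here are all made up for illustration.)

from collections import defaultdict
from itertools import combinations

def flag_suspicious_pairs(votes, min_overlap=10, max_gap_seconds=600):
    # votes: iterable of (account_id, content_id, timestamp) tuples; account_id is
    # assumed to already be pseudonymized before anyone looks at the output.
    by_content = defaultdict(list)
    for account_id, content_id, ts in votes:
        by_content[content_id].append((account_id, ts))

    # Count, for each pair of accounts, how often they voted on the same content
    # within max_gap_seconds of each other.
    pair_overlap = defaultdict(int)
    for voters in by_content.values():
        for (a, ta), (b, tb) in combinations(voters, 2):
            if a != b and abs(ta - tb) <= max_gap_seconds:
                pair_overlap[tuple(sorted((a, b)))] += 1

    # Only pairs with many near-simultaneous co-votes get surfaced for human review.
    return {pair: n for pair, n in pair_overlap.items() if n >= min_overlap}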
On
like a regular user, a moderator cannot see into anyone else’s direct messages.
I confirm this, but I want to highlight that messages on the forum are not end-to-end encrypted and are, by default, sent via email as well (i.e. when you get a message on the forum you also get an email with the message). So forum developers and people who have or will have access to the recipient’s email inbox, or the forum’s email delivery service, can see the messages.
For very private communications, I would recommend using privacy-first end-to-end encrypted platforms like Signal.
Thanks; this is helpful and reassuring, especially re: the DMs. I had read this section of the norms page, and it struck me that the “if we have reason to believe that someone is violating norms around voting” clause was doing a lot of work. I would appreciate more clarification about what would lead mods to believe something like this (and maybe some examples of how you’ve come to have such beliefs). But this is not urgent, and thanks for the clarification you’ve already provided.
Yeah, this is a reasonable thing to ask. So, the “if we have reason to believe that someone is violating norms around voting” clause is intentionally vague, I believe, because if we gave more detail on the kinds of checks/algorithms we have in place for flagging potential violations, then this could help would-be miscreants commit violations that slip past our checks.
(I’m a bit sad that the framing here is adversarial, and that we can’t give users like you more clarification, but I think this state of play is the reality of running an online forum.)
If it helps, though, the bar for looking into a user’s voting history is high. Like, on average I don’t think we do this more than once or twice per month.
Thanks, this is also helpful! One thing to think about (and no need to tell me) is whether making the checks public could effectively disincentivize the bad behavior (like how warnings about speed cameras may disincentivize speeding as effectively as the cameras themselves). But if there are easy workarounds, I can see why this wouldn’t be viable.
Just because something is true doesn’t mean you forfeit your rights to not have that information be made public.
I agree that not all true things should be made public, but I think when it specifically pertains to wrongdoing and someone’s trustworthiness, the public interest can override the right to privacy. If you look into your neighbour’s window and you see them printing counterfeit currency, you go to the police first, rather than giving them an opportunity to simply hide their fraud better.
Maybe the crux is: I think forum users upvoting their own comments is more akin to them Facetuning dating app photos than printing counterfeit currency. Like, this is pretty innocuous behavior and if you just tell people not to do it, they’ll stop.
It seems like we disagree on how bad it is to self-vote (I don’t think it’s anywhere near the level of “actual crime”, but I do think it’s pretty clearly dishonest and unfair, and for such a petty benefit it’s hard for me to feel sympathetic to the temptation).
But I don’t think it’s the central point for me. If you’re simultaneously holding that:
this information isn’t actually a big deal, but
releasing this publicly would cause a lot of harm through reputational damage,
then there’s a paternalistic subtext where people can’t be trusted to come to the “right” conclusions from the facts. If this stuff really wasn’t a big deal, then talking about it publicly wouldn’t be a big deal either. I don’t think people should be shunned forever and excluded from any future employment because they misused multiple accounts on the forum. I do think they should be a little embarrassed, and I don’t think that moving to protect them from that embarrassment is actually a kindness from a community-wide perspective.
I feel like this is getting really complicated and ultimately my point is very simple: prevent harmful behavior via the least harmful means. If you can get people to not vote for themselves by telling them not to, then just… do that. I have a really hard time imagining that someone who was warned about this would continue to do it; if they did, it would be reasonable to escalate. But if they’re warned and then change their behavior, why do I need to know this happened? I just don’t buy that it reflects some fundamental lack of integrity that we all need to know about (or something like this).
I think that posting that someone is banned and why they were banned is not mainly about punishing them. It’s about helping people understand what the moderation team is doing, how rule-breaking is handled, and why someone no longer has access to the forum. For example, it helps us to understand whether the moderation team is acting on inadequate information, or inconsistently between different people. The fact that publishing this information harms people is an unfortunate side effect, secondary to the main effect of improving transparency and keeping people informed.
It doesn’t even really feel right to call them harmed by the publication. If people are harmed by other people knowing they misuse the voting system, I’d say they were mainly harmed by their own misuse of the system, not by someone reporting on it.
I just don’t buy that it reflects some fundamental lack of integrity that we all need to know about
Then you needn’t object to the moderation team talking about what they did!
It’s about helping people understand what the moderation team is doing, how rule-breaking is handled, and why someone no longer has access to the forum.
It’s unclear to me that naming names materially advances the first two goals. As to the third, the suspended user could have the option of having their name disclosed. Otherwise, I don’t think we’re entitled to an explanation of why a particular poster isn’t active anymore.
There’s also the interest in deterring everyone else from doing it (general deterrence), not just in getting these specific people to stop doing it (specific deterrence). While I have mixed feelings about publicly naming offenders, the penalty does need to sting enough to make the benefits of the offense not worth the risk of getting caught. A private warning with no real consequences might persuade the person violating the rules not to do it again, but double-voting would surge as people learned you get a freebie.
“double-voting would surge as people learned you get a freebie.”
I just don’t see this happening?
Separately, one objection I have to cracking down hard on self-voting is that I think this is not very harmful relative to other ways in which people don’t vote how they’re “supposed to.” E.g., we know the correlation between upvotes and agree votes is incredibly high, and downvoting something solely because you disagree with it strikes me as more harmful to discourse on the forum than self-voting. I think the reason self-voting gets highlighted isn’t because it’s especially harmful, it’s just because it’s especially catchable.
If the mods want to improve people’s voting behavior on the forum, I both wish they’d target different voting behavior (ie, the agree/upvoting correlation) and use different means to do it (ie, generating reports for people of their own voting correlations, whether they tend to upvote/downvote certain people, etc), rather than naming/shaming people for self-voting.
I think it’s more that upvoting your own posts from an alt is (1) willful, intentional behavior (2) aimed at deceiving the community about the level of support of a comment (3) for the person’s own benefit. Presumably, most people who are doing it are employing some sort of means to evade detection, which adds another layer of deceptiveness. While I don’t like downvoting-for-disagreement and the like either, that kind of behavior presumptively reflects a natural cognitive bias rather than any of the three characteristics listed above. It is for those reasons that—in my view—downvoting-for-disagreement is generally not the proper subject of a sanctioning system,[1] while self-upvoting is.
I’ve suggested to the mods before that sanctions should sometimes be more carefully tailored to the offense, so I’d be open to the view that consequences like permanently denying the violator’s ability to vote and their ability to use alts might be more tailored to the offense than public disclosure. Those are the specific functions which they have demonstrated an inability to handle responsibly. Neither function is so fundamental to the ability to use the Forum that the mods should feel obliged to expend their time deciding if the violator has rehabilitated themselves enough to restore those privileges.
There could be circumstances in which soft-norm-violative behavior was so extreme that sanctions should be considered. However, unlike “don’t multi-vote” (a bright-line rule, so the violator should be perfectly aware that they are breaking it), these norms are less clear-cut—so privately reaching out to the person would be the appropriate first action in a case like that.
Fair point about reputational harms being worse and possibly too punishing in some cases. I think in terms of a proposed standard it might be worth differentiating (if possible) between e.g. careless errors, or momentary lapses in judgement that were quickly rectified and likely caused no harm in expectation, versus a pattern of dishonest voting intended to mislead the EAF audience, and especially if they or an org that they work for stand to gain from it, or the comments in question are directly harmful to another org. In these latter cases the reputational harm may be more justifiable.
For reasoning transparency / precedent development, it might be worthwhile to address two points:
(1) I seem to remember other multivoting suspensions being much longer than 1 month. I had gotten the impression that the de facto starting point for deliberate multiaccount vote manipulation was ~ six months. Was the length here based on mitigating factors, perhaps the relatively low number of violations and that they occurred on a single day? If the usual sanction is ~ six months, I think it would be good to say that here so newer users understand that multivoting is a really big deal.
(2) Here the public notice names the anon account pinkfrog (which has 3 comments + 50 karma), rather than the user’s non-anon account. The last multi account voting suspension I saw named the user’s primary account, which was their real name. Even though the suspension follows the user, which account is publicly named can have a significant effect on public reputation. How does the mod team decide which user to name in the public notice?
pinkfrog: 1 month (12 cases of double-voting)
LukeDing: 6 months (>200 times)
JamesS: indefinite (8 accounts, number not specified)
[Redacted]: 2 months (13 double-votes, most “likely accidental”, two “self upvotes”)
RichardTK: 6 months (number not specified)
Charles He: 10 years (not quite analogous, as these were alts used to circumvent an initial ban; included other violations)
Torres: 20 years (not quite analogous, as these were alts used to circumvent an initial ban; included other violations)
(Written in a personal capacity, I did not check this with other moderators)
Thank you for the feedback! I didn’t want to go too off-topic, as this is unrelated to this post, so I’m replying here, but I want to quickly share some factual information for other readers.
The post itself obviously violates forum norms and the moderators are defending the post
You’re writing this in multiple comments. I want to make it clear that moderators did not endorse or “defend” (or symmetrically “attack”) the post as moderators. But of course, we do comment as users on parts we agree or disagree with (like any other user). Let us know if it’s not clear whether we’re commenting as users or as moderators.
As for your other warnings, I want to make sure other readers know that your last warning was not for discussing a specific topic, but for being uncivil and not constructive to the discussion. I agree that the situation in the first warning is less relevant to this case, apologies for bringing it up.
Just a quick note to say that we’ve removed a post sharing a Fermi estimate of the chances that the author finds a partner who matches their preferred characteristics and links to a date-me doc.
The Forum is for discussions about improving the world, and a key norm we highlight is “Stay on topic.” This is not the right space for coordinating dating. (Consider exploring LessWrong, ACX threads/classifieds, or EA-adjacent Facebook/Reddit/Discord groups for discussions that are primarily social.)
We’re not taking any other action about the author, although I’ve asked them to stay on topic in the future.
Around a month ago, a post about the authorship of Democratising Risk got published. This post got taken down by its author. Before this happened, the moderation team had been deciding what to do with some aspects of the post (and the resulting discussion) that had violated Forum norms. We were pretty confident that we’d end up banning two users for at least a month, so we banned them temporarily while we sorted some things out.
One of these users was Throwaway151. We banned them for posting something a bit misleading (the post seemed to overstate its conclusions based on the little evidence it had, and wasn’t updated very quickly based on clear counter-evidence), and being uncivil in the comments. Their ban has passed, now. As a reminder, bans affect the user, not the account, so any other accounts Throwaway151 operated were also affected. The other user was philosophytorres — see the relevant update.
Quick update: we’ve banned Defacto, who we have strong reason to believe is another sockpuppet account for Charles He. We are extending Charles’s ban to be indefinite (he and others can appeal if they want to).
We’ve banned Vee from the Forum for 1 year. Their content seems to be primarily or significantly AI-generated,[1] and it’s not clear that they’re using it to share thoughts they endorse and have carefully engaged with. (This had come up before on one of their posts.) Our current policy on AI-generated content makes it clear that we’ll be stricter when moderating AI-generated content. Vee’s content doesn’t meet the standards of the Forum.
If Vee thinks that this is not right, they can appeal. If they come back, we’ll be checking to make sure that their content follows Forum norms. As a reminder, bans affect the user, not the account.
Different detectors for AI content are giving this content different scores, but we think that this is sufficiently likely true to act on.
It’s hard to be certain that something is AI-generated, and I’m not very satisfied with our processes or policies on this front. At the same time, the increase in the number of bots has made dealing with spam or off-topic/troll contributions harder, and I think that waiting for something closer to certainty will have costs that are too high.
Moderation update: I’m indefinitely banning JasMaguire for an extremely racist comment that has since been deleted. We’ll likely revisit and update our forum norms to explicitly discourage this sort of behavior.
Moderation update: We’re issuing dstudiocode a one-month ban for breaking Forum norms in their recent post and subsequent behavior. Specifically:
Posting content that could be interpreted as promoting violence or illegal activities.
The post in question, which asked whether murdering meat-eaters could be considered “ethical,” crosses a line in terms of promoting potential violence.
As a reminder, the ban affects the user, not just the account. During their ban period, the user will not be permitted to rejoin the Forum under another account name. If they return to the Forum after the ban period, we’ll expect a higher standard of norm-following and compliance with moderator instructions.
You can reach out to forum-moderation@effectivealtruism.org with any questions. You can appeal this decision here.
I suggest editing this comment to note the partial reversal on appeal and/or retracting the comment, to avoid the risk of people seeing only it and reading it as vaguely precedential.
Strong +1 to Richard, this seems a clear incorrect moderation call and I encourage you to reverse it.
I’m personally very strongly opposed to killing people because they eat meat, and the general ethos behind that. I don’t feel in the slightest offended or bothered by that post, it’s just one in a string of hypothetical questions, and it clearly is not intended as a call to action or to encourage action.
If the EA Forum isn’t somewhere where you can ask a perfectly legitimate hypothetical question like that, what are we even doing here?
The moderators have reviewed the decision to ban @dstudiocode after users appealed the decision. Tl;dr: We are revoking the ban, and are instead rate-limiting dstudiocode and warning them to avoid posting content that could be perceived as advocating for major harm or illegal activities. The rate limit is due to dstudiocode’s pattern of engagement on the Forum, not simply because of their most recent post—for more on this, see the “third consideration” listed below.
More details:
Three moderators,[1] none of whom was involved in the original decision to ban dstudiocode, discussed this case.
The first consideration was “Does the cited norm make sense?” For reference, the norm cited in the original ban decision was “Materials advocating major harm or illegal activities, or materials that may be easily perceived as such” (under “What we discourage (and may delete or edit out)” in our “Guide to norms on the Forum”). The panel of three unanimously agreed that having some kind of Forum norm in this vein makes sense.
The second consideration was “Does the post that triggered the ban actually break the cited norm?” For reference, the post ended with the question “should murdering a meat eater be considered ‘ethical’?” (Since the post was rejected by moderators, users cannot see it.[2] We regret the confusion caused by us not making this point clearer in the original ban message.)
There was disagreement amongst the moderators involved in the appeal process about whether or not the given post breaks the norm cited above. I personally think that the post is acceptable since it does not constitute a call to action. The other two moderators see the post as breaking the norm; they see the fact that it is “just” a philosophical question as not changing the assessment.[3] (Note: The “meat-eater problem” has been discussed elsewhere on the Forum. Unlike the post in question, in the eyes of the given two moderators, these posts did not break the “advocating for major harm or illegal activities” norm because they framed the question as about whether to donate to save the life of a meat-eating person, rather than as about actively murdering people.)
Amongst the two appeals-panelist moderators who see the post as norm-breaking, there was disagreement about whether the correct response would be a temporary ban or just a warning.
The third consideration was around dstudiocode’s other actions and general standing on the Forum. dstudiocode currently sits at −38 karma following 8 posts and 30 comments. This indicates that their contributions to the discourse have generally not been helpful.[4] Accordingly, all three moderators agreed that we should be more willing to (temporarily) ban dstudiocode for a potential norm violation.
dstudiocode has also tried posting very similar, low-quality (by our lights) content multiple times. The post that triggered the ban was similar to, though more “intense” than, this other post of theirs from five months ago. Additionally, they tried posting similar content through an alt account just before their ban. When a Forum team member asked them about their alt, they appeared to lie.[5] All three moderators agreed that this repeated posting of very similar, low-quality content warrants at least a rate limit (i.e., a cap on how much the user in question can post or comment).[6] (For context, eight months ago, dstudiocode published five posts in an eight-day span, all of which were low quality, in our view. We would like to avoid a repeat of that situation: a rate limit or a ban are the tools we could employ to this end.) Lying about their alt also makes us worried that the user is trying to skirt the rules.
Overall, the appeals panel is revoking dstudiocode’s ban, and is replacing the ban with a warning (instructing them to avoid posting content that could be perceived as advocating for major harm or illegal activities) and a rate limit. dstudiocode will be limited to at most one comment every three days and one post per week for the next three weeks—i.e., until when their original ban would have ended. Moderators will be keeping an eye on their posting, and will remove their posting rights entirely if they continue to publish content that we consider sufficiently low quality or norm-bending.
We would like to thank @richard_ngo and @Neel Nanda for appealing the original decision, as well as @Jason and @dirk for contributing to the discussion. We apologize that the original ban notice was rushed, and failed to lay out all the factors that went into the decision.[7] (Reasoning along the lines of the “third consideration” given above went into the original decision, but we failed to communicate that.)
If anyone has questions or concerns about how we have handled the appeals process, feel free to comment below or reach out.
Technically, two moderators and one moderation advisor. (I write “three moderators” in the main text because that makes referring to them, as I do throughout the text, less cumbersome.)
The three of us discussed whether or not to quote the full version of the post that triggered the ban in this moderator comment, to allow users to see exactly what is being ruled on. By split decision (with me as the dissenting minority), we have decided not to do so: in general, we will probably avoid republishing content that is objectionable enough to get taken down in the first place.
I’m not certain, but my guess is that the disagreement here is related to the high vs. low decoupling spectrum (where high decouplers, like myself, are fine with entertaining philosophical questions like these, whereas low decouplers tend to see such questions as crossing a line).
We don’t see karma as a perfect measure of a user’s value by any means, but we do consider a user’s total karma being negative to be a strong signal that something is awry.
Looking through dstudiocode’s post and comment history, I do think that they are trying to engage in good faith (as opposed to being a troll, say). However, the EA Forum exists for a particular purpose, and has particular standards in place to serve that purpose, and this means that the Forum is not necessarily a good place for everyone who is trying to contribute. (For what it’s worth, I feel a missing mood in writing this.)
In response to our request that they stop publishing similar content from multiple accounts, they said: “Posted from multiple accounts? I feel it is possible that the same post may have been created because maybe the topic is popular?” However, we are >99% confident, based on our usual checks for multiple account use, that the other account that tried to publish this similar content is an alt controlled by them. (They did subsequently stop trying to publish from other accounts.)
We do not have an official policy on rate limits, at present, although we have used rate limits on occasion. We aim to improve our process here. In short, rate limits may be a more appropriate intervention than bans are for users who aren’t clearly breaking norms, but who are nonetheless posting low-quality content or repeatedly testing the edges of the norms.
Notwithstanding the notice we published, which was a mistake, I am not sure if the ban decision itself was a mistake. It turns out that different moderators have different views on the post in question, and I think the difference between the original decision to ban and the present decision to instead warn and rate limit can mostly be chalked up to reasonable disagreement between different moderators. (We are choosing to override the original decision since we spent significantly longer on the review, and we therefore have more confidence in the review decision being “correct”. We put substantial effort into the review because established users, in their appeal, made some points that we felt deserved to be taken seriously. However, this level of effort would not be tenable for most “regular” moderation calls—i.e., those involving unestablished or not-in-great-standing users, like dstudiocode—given the tradeoffs we face.)
I appreciate the thought that went into this. I also think that using rate-limits as a tool, instead of bans, is in general a good idea. I continue to strongly disagree with the decisions on a few points:
I still think including the “materials that may be easily perceived as such” clause has a chilling effect.
I also remember someone’s comment that the things you’re calling “norms” are actually rules, and it’s a little disingenuous to not call them that; I continue to agree with this.
The fact that you’re not even willing to quote the parts of the post that were objectionable feels like an indication of a mindset that I really disagree with. It’s like… treating words as inherently dangerous? Not thinking at all about the use-mention distinction? I mean, here’s a quote from the Hamas charter: “There is no solution for the Palestinian question except through Jihad.” Clearly this is way way more of an incitement to violence than any quote of dstudiocode’s, which you’re apparently not willing to quote. (I am deliberately not expressing any opinion about whether the Hamas quote is correct; I’m just quoting them.) What’s the difference?
“They see the fact that it is “just” a philosophical question as not changing the assessment.” Okay, let me now quote Singer. “Human babies are not born self-aware, or capable of grasping that they exist over time. They are not persons… the life of a newborn is of less value than the life of a pig, a dog, or a chimpanzee.” Will you warn/ban me from the EA forum for quoting Singer, without endorsing that statement? What if I asked, philosophically, “If Singer were right, would it be morally acceptable to kill a baby to save a dog’s life?” I mean, there are whole subfields of ethics based on asking about who you would kill in order to save whom (which is why I’m pushing on this so strongly: the thing you are banning from the forum is one of the key ways people have had philosophical debates over foundational EA ideas). What if I defended Singer’s argument in a post of my own?
As I say this, I feel some kind of twinge of concern that people will find this and use it to attack me, or that crazy people will act badly inspired by my questions. I hypothesize that the moderators are feeling this kind of twinge more generally. I think this is the sort of twinge that should and must be overridden, because listening to it means that your discourse will forever be at the mercy of whoever is most hostile to you, or whoever is craziest. You can’t figure out true things in that situation.
(On a personal level, I apologize to the moderators for putting them in difficult situations by saying things that are deliberately in the grey areas of their moderation policy. Nevertheless I think it’s important enough that I will continue doing this. EA is not just a group of nerds on the internet any more, it’s a force that shapes the world in a bunch of ways, and so it is crucial that we don’t echo-chamber ourselves into doing crazy stuff (including, or especially, when the crazy stuff matches mainstream consensus). If you would like to warn/ban me, then I would harbor no personal ill-will about it, though of course I will consider that evidence that I and others should be much more wary about the quality of discourse on the forum.)
I’m pretty sure we could come up with various individuals and groups of people that some users of this forum would prefer not to exist. There’s no clear and unbiased way to decide which of those individuals and groups could be the target of “philosophical questions” about the desirability of murdering them and which could not. Unless we’re going to allow the question as applied to any individual or group (which I think is untenable for numerous reasons), the line has to be drawn somewhere. “Would it be ethical to get rid of this meddlesome priest?” should be suspendable or worse (except that the meddlesome priest in question has been dead for over eight hundred years).
And I think drawing the line at “we’re not going to allow hypotheticals about murdering discernible people”[1] is better (and poses less risk of viewpoint suppression) than expecting the mods to somehow devise a rule for when that content will be allowed and consistently apply it. I think the effect of a bright-line no-murder-talk rule on expression of ideas is modest because (1) posters can get much of the same result by posing non-violent scenarios (e.g., leaving someone to drown in a pond is neither an act of violence nor generally illegal in the United States) and (2) there are other places to have discussions if the murder content is actually important to the philosophical point.[2]
By “discernible people,” I mean those with some sort of salient real-world characteristic, as opposed to being 99-100% generic abstractions (especially if in a clearly unrealistic scenario, like the people in the trolley problem).
And I think drawing the line at we’re not going to allow hypotheticals about murdering discernible people
Do you think it is acceptable to discuss the death penalty on the forum? Intuitively this seems within scope—historically we have discussed criminal justice reform on the forum, and capital punishment is definitely part of that.
If so, is the distinction state violence vs individual violence? This seems not totally implausible to me, though it does suggest that the offending poster could simply re-word their post to be about state-sanctioned executions and leave the rest of the content untouched.
I’ve weak karma downvoted and disagreed with this, then hit the “insightful” button. Definitely made me think and learn.
I agree that this is a really tricky question, and some of those philosophical conversations (including this one) are important and should happen, but I don’t think this particular EA forum is the best place for them, for a few reasons.
1) I think there are better places to have these often awkward, fraught conversations. I think they are often better had in person, where you can connect, preface, soften, and easily retract. I recently got into a mini online tiff, when a wise onlooker noted...
”Online discussions can turn that way with a few misinterpretations creating a doom loop that wouldn’t happen with a handshake and a drink”
Or alternatively perhaps in a more academic/narrow forum where people have similar discussion norms and understandings. This forum has a particularly wide range of users, from nerds to philosophers to practitioners to managers to donors so there’s a very wide range of norms and understandings.
2) There’s potential reputational damage for all the people doing great EA work across the spectrum here. These kinds of discussions could lead to more hit-pieces and reduced funding. It would be a pity if the AI apocalypse hit us because of funding cuts due to these discussions. (OK now I’m strawmanning a bit :D)
3) The forum might be an entry point into EA for some people. I don’t think it’s a good idea for these discussions to be the first thing someone looking into EA sees on the internet.
4) It might be a bit of a strawman to say our “discourse will forever be at the mercy of whoever is most hostile to you, or whoever is craziest.” I think people hostile to EA don’t like many things said here on the forum, but we aren’t forever at their mercy and we keep talking. I think there are just a few particular topics that give people more ammunition for public take-downs, and there is wisdom in sometimes avoiding loading balls into your opponents’ cannons.
5) I think if you (like Singer) write your own opinion in your own book, it’s a different situation—you are the one writing and you take full responsibility for your work—whereas on a public forum it at least feels like there is a smidgeon of shared accountability for what is said. Forms of this debate have been going on for some time about what is posted on Twitter / Facebook etc.
6) I agree with you that the quote from the Hamas charter is more dangerous—and I think we shouldn’t be publishing or discussing that on the forum either.
I have great respect for these free speech arguments, and think this is a super hard question where the “best” thing to do might well change a lot over time, but right now I don’t think allowing these discussions and arguments on this particular EA forum will lead to more good in the long run.
I think there are better places to have these often awkward, fraught conversations.
You are literally talking about the sort of conversations that created EA. If people don’t have these conversations on the forum (the single best way to create common knowledge in the EA community), then it will be much harder to course-correct places where fundamental ideas are mistaken. I think your comment proceeds from the implicit assumption that we’re broadly right about stuff, and mostly just need to keep our heads down and do the work. I personally think that a version of EA that doesn’t have the ability to course-correct in big ways would be net negative for the world. In general it is not possible to e.g. identify ongoing moral catastrophes when you’re optimizing your main venue of conversations for avoiding seeming weird.
I agree with you the quote from the Hamas charter is more dangerous—and think we shouldn’t be publishing or discussing that on the forum either.
If you’re not able to talk about evil people and their ideologies, then you will not be able to account for them in reasoning about how to steer the world. I think EA is already far too naive about how power dynamics work at large scales, given how much influence we’re wielding; this makes it worse.
There’s potential reputational damage for all the people doing great EA work across the spectrum here.
I think there are just a few particular topics which give people more ammunition for public take-downs, and there is wisdom in sometimes avoiding loading balls into your opponents cannons.
Insofar as you’re thinking about this as a question of coalitional politics, I can phrase it in those terms too: the more censorious EA becomes, the more truth-seeking people will disaffiliate from it. Habryka, who was one of the most truth-seeking people involved in EA, has already done so; I wouldn’t say it was directly because of EA not being truth-seeking enough, but I think that was one big issue for him amongst a cluster of related issues. I don’t currently plan to, but I’ve considered the possibility, and the quality of EA’s epistemic norms is one of my major considerations (of course, the forum’s norms are only a small part of that).
However, having said this, I don’t think you should support more open forum norms mostly as a concession to people like me, but rather in order to pursue your own goals more effectively. Movements that aren’t able to challenge foundational assumptions end up like environmentalists: actively harming the causes they’re trying to support.
Just to narrow in on a single point—I have found the ‘EA fundamentally depends on uncomfortable conversations’ point to be a bit unnuanced in the past. It seems like we could be more productive by delineating which kinds of discomfort we want to defend—for example, most people here don’t want to have uncomfortable conversations about age of consent laws (thankfully), but do want to have them about factory farming.
When I think about the founding myths of EA, most of them seem to revolve around the discomfort of applying utilitarianism in practice, or on how far we should expand our moral circles. I think EA would’ve broadly survived intact by lightly moderating other kinds of discomfort (or it may have even expanded).
I’m not keen to take a stance on whether this post should or shouldn’t be allowed on the forum, but I am curious to hear if and where you would draw this line :)
Narrowing in even further on the example you gave, as an illustration: I just had an uncomfortable conversation about age of consent laws literally yesterday with an old friend of mine. Specifically, my friend was advocating that the most important driver of crime is poverty, and I was arguing that it’s cultural acceptance of crime. I pointed to age of consent laws varying widely across different countries as evidence that there are some cultures which accept behavior that most westerners think of as deeply immoral (and indeed criminal).
Picturing some responses you might give to this:
That’s not the sort of uncomfortable claim you’re worried about
But many possible continuations of this conversation would in fact have gotten into more controversial territory. E.g. maybe a cultural relativist would defend those other countries having lower age of consent laws. I find cultural relativism kinda crazy (for this and related reasons) but it’s a pretty mainstream position.
I could have made the point in more sensitive ways
Maybe? But the whole point of the conversation was about ways in which some cultures are better than others. This is inherently going to be a sensitive claim, and it’s hard to think of examples that are compelling without being controversial.
This is not the sort of thing people should be discussing on the forum
But EA as a movement is interested in things like:
Criminal justice reform (which OpenPhil has spent many tens of millions of dollars on)
Promoting women’s rights (especially in the context of global health and extreme poverty reduction)
What factors make what types of foreign aid more or less effective
More generally, the relationship between the developed and the developing world
So this sort of debate does seem pretty relevant.
I think EA would’ve broadly survived intact by lightly moderating other kinds of discomfort (or it may have even expanded).
The important point is that we didn’t know in advance which kinds of discomfort were of crucial importance. The relevant baseline here is not early EAs moderating ourselves, it’s something like “the rest of academic philosophy/society at large moderating EA”, which seems much more likely to have stifled early EA’s ability to identify important issues and interventions.
(I also think we’ve ended up at some of the wrong points on some of these issues, but that’s a longer debate.)
Do you have an example of the kind of early EA conversation—one you think was really important in arriving at core EA tenets—that might be frowned upon or censored on the forum now? I’m still super dubious about whether leaving out a small number of specific topics really leaves much value on the table.
And I really think conversations can be had in more sensitive ways. In the case of the original banned post, just as good a philosophical conversation could be had without explicitly talking about killing people. The conversation was already being had on another thread about “the meat-eater problem”.
And as a sidebar, yeah, I wouldn’t have any issue with that above conversation myself, because we just have to discuss that practically with donors and internally when providing health care and getting confronted with tricky situations. Also (again as a sidebar), it’s interesting that age-of-marriage/consent conversations can be a place where classic left-wing cultural relativism and gender safeguarding collide and don’t know which way to swing. We’ve had to ask that question practically in our health centers, to decide who to give family planning to and when to consider referring to the police, etc. Super tricky.
My point is not that the current EA forum would censor topics that were actually important early EA conversations, because EAs have now been selected for being willing to discuss those topics. My point is that the current forum might censor topics that would be important course-corrections, just as if the rest of society had been moderating early EA conversations, those conversations might have lost important contributions like impartiality between species (controversial: you’re saying human lives don’t matter very much!), the ineffectiveness of development aid (controversial: you’re attacking powerful organizations!), transhumanism (controversial, according to the people who say it’s basically eugenics), etc.
Re “conversations can be had in more sensitive ways”, I mostly disagree, because of the considerations laid out here: the people who are good at discussing topics sensitively are mostly not the ones who are good at coming up with important novel ideas.
For example, it seems plausible that genetic engineering for human intelligence enhancement is an important and highly neglected intervention. But you had to be pretty disagreeable to bring it into the public conversation a few years ago (I think it’s now a bit more mainstream).
This moderation policy seems absurd. The post in question was clearly asking purely hypothetical questions, and wasn’t even advocating for any particular answer to the question. May as well ban users for asking whether it’s moral to push a man off a bridge to stop a trolley, or ban Peter Singer for his thought experiments about infanticide.
Perhaps dstudiocode has misbehaved in other ways, but this announcement focuses on something that should be clearly within the bounds of acceptable discourse. (In particular, the standard of “content that could be interpreted as X” is a very censorious one, since you now need to cater to a wide range of possible interpretations.)
Ah, thanks, that’s important context—I semi-retract my strongly worded comment above, depending on exactly how bad the removed post was, but can imagine posts in this genre that I think are genuinely bad
I don’t like my mod message, and I apologize for it. I was rushed and used some templated language that I knew damn well at the time that I wasn’t excited about putting my name behind. I nevertheless did and bear the responsibility.
That’s all from me for now. The mods who weren’t involved in the original decision will come in and reconsider the ban, pursuant to the appeal.
In the post that prompted the ban, they asked whether murdering meat-eaters could be considered ethical. I don’t want to comment on whether this would be an appropriate topic for a late-night philosophy club conversation; it is not an appropriate topic for the EA Forum.
I think speculating about what exactly constitutes the most good is perfectly on-topic. While ‘murdering meat-eaters’ is perhaps an overly direct phrasing (and of course under most ethical frameworks murder raises additional issues as compared to mere inaction or deprioritization), the question of whether the negative utility produced by one marginal person’s worth of factory farming outweighs the positive utility that person experiences—colloquially referred to as the meat-eater problem—is one that has been discussed here a number of times, and that I feel is quite relevant to the question of which interventions should be prioritized.
I’d separate out the removal and the suspension, and dissent only as to the latter.
I get why the mods would feel the need to give a wide berth to anything that some person could somehow “interpret[] as promoting violence or illegal activities.” Making a rule against brief hypothetical mentions of the possible ethics of murder is defensible, especially in light of certain practical realities.
However, I can’t agree with taking punitive action against a user where the case that they violated the norm is this tenuous and there is a lack of fair prior notice of the mods’ interpretation. For that kind of action, I think the minimum standard would be either clear notice or content that a reasonable person would recognize could reasonably be interpreted as promoting violence. In other words, was the poster negligent in failing to recognize that violence promotion was a reasonable interpretation?
I don’t think the violence-promoting interpretation is a reasonable one here, and it sounds like several other users agree—which I take as evidence of non-negligence.
3-4 minutes, mostly on playing through various elimination-order scenarios in my head and trying to ensure that my assigned values would still reflect my preferences in at least more likely scenarios.
The percentages I inputted were best guesses based on my qualitative impressions. If I’d been more quantitative about it, then I expect my allocations would have been better—i.e., closer to what I’d endorse on reflection. But I didn’t want to spend long on this, and figured that adding imperfect info to the commons would be better than adding no info.
IIRC it took me about a minute or two. But I already had high context and knew how I wanted to vote, so after getting oriented I didn’t have to spend time learning more or thinking through tradeoffs.
It took me ~1 minute. I already had a favourite candidate so I put all my points towards that. I was half planning to come back and edit to add backup choices but I’ve seen the interim results now so I’m not going to do that.
Probably about 30 minutes of unfocused thought on the actual voting. Mainly it was spent negotiating between what I thought was sort of best and some guilt and status based obligation stuff.
On top of that I perhaps read 2-4 articles and chatted to 1-2 people involved in orgs. I guess that was 1-3 hours.
I think around 5-10 mins? I tried to compare everything I cared at all about, so I only used multipliers between 0 and 2 (otherwise I would have lost track and ended up with intransitive preferences). The comparison stage took the most time. I edited things in the end a little bit, downgrading some charities to 0.
Reflection on my time as a Visiting Fellow at Rethink Priorities this summer
I was a Visiting Fellow at Rethink Priorities this summer. They’re hiring right now, and I have lots of thoughts on my time there, so I figured that I’d share some. I had some misconceptions coming in, and I think I would have benefited from a post like this, so I’m guessing other people might, too. Unfortunately, I don’t have time to write anything in depth for now, so a shortform will have to do.
Fair warning: this shortform is quite personal and one-sided. In particular, when I tried to think of downsides to highlight to make this post fair, few came to mind, so the post is very upsides-heavy. (Linch’s recent post has a lot more on possible negatives about working at RP.) Another disclaimer: I changed in various ways during the summer, including in terms of my preferences and priorities. I think this is good, but there’s also a good chance of some bias (I’m happy with how working at RP went because working at RP transformed me into the kind of person who’s happy with that sort of work, etc.). (See additional disclaimer at the bottom.)
First, some vague background on me, in case it’s relevant:
I finished my BA this May with a double major in mathematics and comparative literature.
I had done some undergraduate math research, had taught in a variety of contexts, and had worked at Canada/USA Mathcamp, but did not have a lot of proper non-Academia work experience.
I was introduced to EA in 2019.
Working at RP was not what I had expected (it seems likely that my expectations were skewed).
One example of this was how my supervisor (Linch) held me accountable. Accountability existed in such a way that helped me focus on goals (“milestones”) rather than making me feel guilty about falling behind. (Perhaps I had read too much about bad workplaces and poor incentive structures, but I was quite surprised and extremely happy about this fact.) This was a really helpful transition for me from the university context, where I often had to complete large projects with less built-in support. For instance, I would have big papers due as midterms (or final exams that accounted for 40% of a course grade), and I would often procrastinate on these because they were big, hard to break down, and potentially unpleasant to work on. (I got really good at writing a 15-page draft overnight.)
In contrast, at Rethink, Linch would help me break down a project into steps (“do 3 hours of reading on X subject,” “reach out to X person,” “write a rough draft of brainstormed ideas in a long list and share it for feedback,” etc.), and we would set deadlines for those. Accomplishing each milestone felt really good, and kept me motivated to continue with the project. If I was behind the schedule, he would help me reprioritize and think through the bottlenecks, and I would move forward. (Unless I’m mistaken, managers at RP had taken a management course in order to make sure that these structures worked well — I don’t know how much that helped because I can’t guess at the counterfactual, but from my point of view, they did seem quite prepared to manage us.)
Another surprise: Rethink actively helped me meet many (really cool) people (both when they did things like give feedback, and through socials or 1-1’s). I went from ~10 university EA friends to ~25 people I knew I could go to for resources or help. I had not done much EA-related work before the internship (e.g. my first EA Forum post was due to RP), but I never felt judged or less respected for that. Everyone I interacted with seemed genuinely invested in helping me grow. They sent me relevant links, introduced me to cool new people, and celebrated my successes.
I also learned a lot and developed entirely new interests. My supervisor was Linch, so it might be unsurprising that I became quite interested in forecasting and related topics. Beyond this, however, I found the work really exciting, and explored a variety of topics. I read a bunch of economics papers and discovered that the field was actually really interesting (this might not be a surprise to others, but it was to me!). I also got to fine-tune my understanding of and opinions on a number of questions in EA and longtermism. I developed better work (or research) habits, gained some confidence, and began to understand myself better.
Here’s what I come up with when I try to think of negatives:
I struggled to some extent with the virtual setting (e.g. due to tech or internet issues). Protip: if you find yourself with a slow computer, fix that situation asap.
There might have been too much freedom for me — I probably spent too long choosing and narrowing my next project topics. Still, this wasn’t purely negative; I think I ended up learning a lot during the exploratory interludes (I went on deep dives into things like x-risks from great power conflict, even though these didn’t help me produce outputs). As far as I know, this issue is less relevant for more senior positions, and a number of more concrete projects are more straightforwardly available now. (It also seems likely that I could have mitigated this by realizing it would be an issue right away.)
I would occasionally fall behind and become stressed about that. A few tasks became ugh fields. As the summer progressed, I think I got better about immediately telling Linch when I noticed myself feeling guilty or unhappy about a project, and this helped a lot.
Opportunity cost. I don’t know exactly what I would have done during the summer if not RP, but it’s always possible it would have been better.
Obviously, if I were restarting the summer, I would do some things differently. I might focus on producing outputs faster. I might be more active in trying to meet people. I would probably organize my daily routine differently. But some of the things I list here are precisely changes in my preferences or priorities that result from working at RP. :)
I don’t know if anyone will have questions, but feel free to ask if you do. I should note, though, that I might not be able to answer many, as I’m quite low on free time (I just started a new job).
Note: nobody pressured me to write this shortform, although Linch & some other people at RP did know I was doing it and were happy for it. For convenience, here’s a link to RP’s hiring page.
Thanks for writing this Lizka! I agree with many of the points in this [I was also a visiting fellow on the longtermist team this summer]. I’ll throw my two cents in about my own reflections (I broadly share Lizka’s experience, so here I just highlight the upsides/downsides that especially resonated with me, or things unique to my own situation):
Vague background:
Finished BSc in PPE this June
No EA research experience and very little academic research experience
Introduced to EA in 2019
Upsides:
Work in areas that are intellectually stimulating and feel meaningful (e.g. Democracy, AI Governance).
Become a better researcher. In particular, understanding reasoning transparency, reaching out to experts, the neglected virtue of scholarship, giving and receiving feedback, and being generally more productive. Of course, there is a difference between (1) understanding these skills and (2) internalizing & applying them, but I think RP helped substantially with the first and set me on the path to doing the second.
Working with super cool people. Everyone was super friendly, and clearly supportive of our development as researchers. I also had not written an EA Forum post before RP, but got a lot of support in breaking through that barrier.
Downsides:
Working remotely was super challenging for me. I underestimated how significant a factor this would be to begin with, and so I would not dismiss this lightly. However, I think there are ways that one can remedy this if they are sufficiently proactive/agent-y (e.g. setting up in-person co-working, moving cities to be near staff, using Focusmate, etc). Also, +1 to getting a fast computer (and see Peter’s comment on this).
Imposter syndrome. One downside of working with super cool, brilliant, hard-working people was (for me) a feeling that I was way out of my depth, especially to begin with. This is of course different for everyone, but it was one thing I struggled to fully overcome. However, RP staff are very willing to help out where they can, should this become a problem.
Ugh fields. There were definitely times when I felt somewhat overwhelmed by work, with sometimes negative spirals. This wasn’t helped by personal circumstances, but my manager (Michael) was super accommodating and understanding of this, which helped alleviate guilt.
I might write up a shortform on some of these points in more depth, especially the things I learnt about being a better researcher, if that would be helpful for others.
Overall, I also really enjoyed my time at RP, and would highly recommend :)
(I did not speak to anyone at RP before writing this).
Thanks a lot for writing about your experiences, Lizka and Tom! The details about why you were happy with your managers were especially valuable info for me.
Note to onlookers that we at Rethink Priorities will pay up to $2000 for people to upgrade their computers and that we view this as very important! And if you work with us for more than a year, you can keep your new computer forever.
I realize that this policy may not be a great fit for interns / fellows though, so perhaps I will think about how we can approach that.
I think we should maybe just send a new mid-end chromebook + high-end headsets with builtin mic + other computing supplies to all interns as soon as they start (or maybe before), no questions asked. Maybe consider higher end equipment for interns who are working on more compute-intensive stuff and/or if they or their managers asked for it.
For some of the intern projects (most notably on the survey team?), more computing power is needed, but since so much of RP work involves Google Docs + looking stuff up fast on the internet + Slack/Google Meet comms, the primary technological bottleneck that we should try to solve is really fast browsing/typing/videocall latency and quality, which chromebooks and headsets should be sufficient for.
(For logistical reasons I’m assuming that the easiest thing to do is to let the interns keep the chromebook and relevant accessories)
USAID has announced that they’ve committed $4 million to fighting global lead poisoning!
USAID Administrator Samantha Power also called other donors to action, and announced that USAID will be the first bilateral donor agency to join the Global Alliance to Eliminate Lead Paint. The Center for Global Development (CGD) discusses the implications of the announcement here.
For context, lead poisoning seems to get ~$11-15 million per year right now, and has a huge toll. I’m really excited about this news.
Also, thanks to @ryancbriggs for pointing out that this seems like “a huge win for risky policy change global health effective altruism” and referencing this grant:
In December 2021, GiveWell (or the EA Funds Global Health and Development Fund?) gave a grant to CGD to “to support research into the effects of lead exposure on economic and educational outcomes, and run a working group that will author policy outreach documents and engage with global policymakers.” In their writeup, they recorded a 10% “best case” forecast that in two years (by the end of the grant period), “The U.S. government, other international actors (e.g., bilateral and multilateral donors), and/or national LMIC governments take measurable action to reduce lead exposure—for example, through increased funding for lead mitigation and research, increased monitoring of lead exposure, and/or enactment of regulations.” We’ve reached this best case and it’s been almost exactly two years! (Attributing credit is really hard and I have no experience and little context in this area — as far as I know this could have happened without that grant or related advocacy. But it’s still notable to me that a CGD report is cited in Power’s announcement.)
This is awesome! Is there a page somewhere that collates the results of a bunch of internal forecasting by the end of the grant period? I’d be interested.
A note on how I think about criticism
(This was initially meant as part of this post,[1] but while editing I thought it didn’t make a lot of sense there, so I pulled it out.)
I came to CEA with a very pro-criticism attitude. My experience there reinforced those views in some ways,[2] but it also left me more attuned to the costs of criticism (or of some pro-criticism attitudes). (For instance, I used to see engaging with all criticism as virtuous, and have changed my mind on that.) My overall takes now aren’t very crisp or easily summarizable, but I figured I’d try to share some notes.
...
It’s generally good for a community’s culture to encourage criticism, but this is more complicated than I used to think.
Here’s a list of things that I believe about criticism:
Criticism or critical information can be extremely valuable.
It can be hard for people to surface criticism (e.g. because they fear repercussions), which means criticism tends to be undersupplied.[3]
Requiring critics to present their criticisms in specific ways will likely stifle at least some valuable criticism.
It can be hard to get yourself to engage with criticism of your work or things you care about.
It’s easy to dismiss true and important criticism without noticing that you’re doing it.
→ Making sure that your community’s culture appreciates criticism (and earnest engagement with it), tries to avoid dismissing critical content based on stylistic or other non-fundamental qualities, encourages people to engage with it, and disincentivizes attempts to suppress it can be a good way to counteract these issues.
At the same time, trying to actually do anything is really hard.[4]
Appreciation for doers is often undersupplied.
Being in leadership positions or engaging in public discussions is a valuable service, but opens you up to a lot of (often stressful) criticism, which acts as a disincentive for being public.
Psychological safety is important in teams (and communities), so it’s unfortunate that critical environments lead more people to feel like they would be judged harshly for potential mistakes.
Not all criticism is useful enough to be worth engaging with (or sharing).
Responding to criticism can be time-consuming or otherwise costly and isn’t always worth it.[5]
Sometimes people who are sharing “criticism” hate the project for reasons that aren’t what’s explicitly stated, or just want to vent or build themselves up.[6]
… and cultures like the one described above can exacerbate these issues.
I don’t have strong overall recommendations. Here’s a post on how I want to handle criticism, which I think is still accurate. I also (tentatively) think that on the margin, the average person in EA who is sharing criticism of someone’s work should probably spend a bit more time trying to make that criticism productive. And I’d be excited to see more celebration or appreciation for people’s work. (I also discussed related topics in this short EAG talk last year.)
This was in that post because I ended up engaging with a lot of discussion about the effects of criticism in EA (and of the EA Forum’s critical culture) as part of running a Criticism Contest (and generally working on CEA’s Online Team).
I’ve experienced first-hand how hard it is to identify flaws in projects you’re invested in, I’ve seen how hard it is for some people to surface critical information, and noticed some ways in which criticism can be shut down or disregarded by well-meaning people.
See also the rationale in our Criticism Contest announcement post.
Kinda related: EA should taboo “EA should”
See the “transparency” example in my post on “missing moods”.
Also: You don’t have to respond to every comment.
A lot of what Richard says in Moral Misdirection (and in Anti-Philanthropic Misdirection) also seems true and relevant here.
I would be excited about this and have wondered for a while if we should have EA awards. This Washington Post article brought the idea to my mind again:
A note on mistakes and how we relate to them
(This was initially meant as part of this post[1], but I thought it didn’t make a lot of sense there, so I pulled it out.)
“Slow-rolling mistakes” are usually much more important to identify than “point-in-time blunders,”[2] but the latter tend to be more obvious.
When we think about “mistakes”, we usually imagine replying-all when we meant to reply only to the sender, using the wrong input in an analysis, including broken hyperlinks in a piece of media, missing a deadline, etc. I tend to feel pretty horrible when I notice that I’ve made a mistake like this.
I now think that basically none of my mistakes of this kind — I’ll call them “point-in-time blunders” — mattered nearly as much as other “mistakes” I’ve made by doing things like planning my time poorly, delaying for too long on something, setting up poor systems, or focusing on the wrong things.
This second kind of mistake — let’s use the phrase “slow-rolling mistakes” — is harder to catch; I think sometimes I’d identify them by noticing a nagging worry, or by having multiple conversations with someone who disagreed with me (and slowly changing my mind), or by seriously reflecting on my work or on feedback I’d received.
...
This is not a novel insight, but I think it was an important thing for me to realize. Working at CEA helped move me in this direction. A big factor in this, I think, was the support and reassurance I got from people I worked with.
This was over two years ago, but I still remember my stomach dropping when I realized that instead of using “EA Forum Digest #84” as the subject line for the 84th Digest, I had used “...#85.” Then I did it AGAIN a few weeks later (instead of #89). I’ve screenshotted Ben’s (my manager’s) reaction.
...
I discussed some related topics in a short EAG talk I gave last year, and also touched on these topics in my post about “invisible impact loss”.
An image from that talk.
It was there because my role gave me the opportunity to actually notice a lot of the mistakes I was making (something that I think is harder if you’re working on something like research, or in a less public role), which also meant I could reflect on them.
If you have better terms for these, I’d love suggestions!
After reading your post, I wasn’t sure you were right about this. But after thinking about it for a few minutes, I can’t come up with any serious mistakes I’ve made that were “point-in-time blunders”.
The closest thing I can think of is when I accidentally donated $20,000 to the GiveWell Community Foundation instead of The Clear Fund (aka GiveWell), but fortunately they returned the money so it all worked out.
If you feel overwhelmed by FTX-collapse-related content on the Forum, you can hide most of it by using a tag filter: hover over the “FTX collapse” tag on the Frontpage (find it to the right of the “Frontpage Posts” header), and click on “Hidden.”
[Note: this used to say “FTX crisis,” and that might still show up in some places.]
Vasili Arkhipov is discussed less on the EA Forum than Petrov is (see also this thread of less-discussed people). I thought I’d post a quick take describing that incident.
Arkhipov & the submarine B-59’s nuclear torpedo
On October 27, 1962 (during the Cuban Missile Crisis), the Russian diesel-powered submarine B-59 started experiencing[1] nearby depth-charge explosions from US forces above it; the submarine had been detected and US ships seemed to be attacking. The submarine’s air conditioning was broken,[2] CO2 levels were rising, and B-59 was out of contact with Moscow. Two of the senior officers on the submarine, thinking that a global war had started, wanted to launch their “secret weapon,” a 10-kiloton nuclear torpedo. The captain, Valentin Savitsky, apparently exclaimed: “We’re gonna blast them now! We will die, but we will sink them all — we will not become the shame of the fleet.”
The ship was authorized to launch the torpedo without confirmation from Moscow, but all three senior officers on the ship had to agree.[3] Chief of staff of the flotilla Vasili Arkhipov refused. He convinced Captain Savitsky that the depth charges were signals for the Soviet submarine to surface (which they were) — if the US ships really wanted to destroy the B-59, they would have done it by now. (Part of the problem seemed to be that the Soviet officers were used to different signals than the ones the Americans were using.) Arkhipov calmed the captain down[4] and got him to surface the submarine to get orders from the Kremlin, which eventually defused the situation.
(Here’s a Vox article on the incident.)
The B-59 submarine.
Vadim Orlov described the impact of the depth charges as being inside an oil drum getting struck with a sledgehammer.
Temperatures were apparently above 45ºC (113ºF).
The B-59 was apparently the only submarine in the flotilla that required three officers’ approval in order to fire the “special weapon” — the others only required the captain and the political officer to approve the launch.
From skimming some articles and first-hand accounts, it seems unclear if the captain just had an outburst and then accurately wanted to follow protocol (and use the torpedo), or if he was truly reacting irrationally/emotionally because of the incredibly stressful environment. Accounts conflict a bit, and my sense is that orders around using the torpedo were unclear and overly permissive (or even encouraging towards using it).
For anyone interested in watching a dramatic reconstruction of this incident, go to timestamp 43:30–47:05 of The Man Who Saved The World. (I recommend watching at 1.5x speed.)
Edit: I’ve now shared: Donation Election: how voting will work. Really grateful for the discussion on this thread!
We’re planning on running a Donation Election for Giving Season.
What do you think the final voting mechanism should be, and why? E.g. approval voting, ranked-choice voting, quadratic voting, etc.
Considerations might include: how well this will allocate funds based on real preferences, how understandable it is to people who are participating in the Donation Election or following it, etc.
I realize that I might be opening a can of worms, but I’m looking forward to reading any comments! I might not have time to respond.
Some context (see also the post):
Users will be able to “pre-vote” (to signal that they’re likely to vote for some candidates, and possibly to follow posts about some candidates), for as many candidates as they want. The pre-votes are anonymous (as are final votes), but the total numbers will be shown to everyone. There will be a separate process for final voting, which will determine the three winners in the election. The three winners will receive the winnings from the Donation Election Fund, split proportionally based on the votes.
Only users who had an account as of October 22, 2023, will be able to vote, unfortunately. We’ve had to add this restriction to avoid election manipulation (we’ll also be monitoring in other ways). I realize that this limits genuine new users’ ability to vote, but hopefully the fact that newer users can participate in other ways (like by encouraging others to vote for some candidates, or by donating to candidates/the Donation Election Fund) helps a bit.
A quick preview of the pre-votes, in case you’re interested:
I’m a researcher on voting theory, with a focus on voting over how to divide a budget between uses. Sorry, I found this post late, so things are probably already decided, but I thought I’d add my thoughts. I’m going to assume approval voting as the input format.
There is an important high-level decision to make first regarding the objective: do we want to pick charities with the highest support (majoritarian) or do we want to give everyone equal influence on the outcome if possible (proportionality)?
If the answer is “majoritarian”, then the simplest method makes the most sense: give all the money to the charity with the highest approval score. (This maximizes the sum of voter utilities, if you define voter utility to be the amount of money that goes to the charities a voter approves.)
If the answer is “proportionality”, my top recommendation would be to drop the idea of having only 3 winners and not impose a limit, and instead use the Nash Product rule to decide how the money is split [paper, wikipedia]. This rule has a nice interpretation: say there are 100 voters; then every voter is assigned 1/100th of the budget and is guaranteed that this part is only spent on charities that the voter has approved. The exact proportions in which each voter’s share is used are decided based on the overall popularity of the charities. This rule has various nice properties, including Pareto efficiency and strong proportionality properties (guaranteeing things like “if 30% of voters vote for animal charities, then 30% of the budget will be spent on animal charities”).
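To make the Nash Product rule concrete, here is a minimal sketch in Python. It’s an illustration rather than the algorithm from the cited paper: it just hands the Nash-welfare objective (the sum over voters of the log of the money going to charities they approve) to a generic optimizer, and the charity names and ballots are made up.

```python
import numpy as np
from scipy.optimize import minimize

def nash_product_split(ballots, charities, budget=100.0):
    """ballots: list of sets of approved charities; returns {charity: amount}."""
    masks = [np.array([c in ballot for c in charities], dtype=float) for ballot in ballots]

    def neg_nash_welfare(x):
        # Negative sum of log-utilities; a small epsilon avoids log(0) at the boundary.
        return -sum(np.log(mask @ x + 1e-9) for mask in masks)

    x0 = np.full(len(charities), budget / len(charities))  # start from an even split
    result = minimize(
        neg_nash_welfare,
        x0,
        method="SLSQP",
        bounds=[(0.0, budget)] * len(charities),
        constraints=[{"type": "eq", "fun": lambda x: x.sum() - budget}],
    )
    return {c: round(amount, 2) for c, amount in zip(charities, result.x)}

# Toy example: 3 voters approve only A, 1 voter approves only B -> roughly a 75/25 split.
print(nash_product_split([{"A"}, {"A"}, {"A"}, {"B"}], ["A", "B"]))
```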
If you want to stick with the 3-winner constraint, there is no academic research about this exact type of voting situation. But if proportionality is desired, I would not select the 3 winners as the 3 charities with the highest vote score, but would instead use Proportional Approval Voting [wikipedia] to make the selection. This would avoid the issue that @Tetraspace identified in another comment, where there is a risk that all 3 top charities are similar and belong to the largest subgroup of voters. Once the selection of 3 charities is done, I would not split the money in proportion to approval scores but either (a) split it equally, or (b) normalize the scores so that a voter who approved 2 of the 3 winners contributes 0.5 points to each of them, instead of 1 point to each. Otherwise those who approved 2 out of 3 get higher voting weight.
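And here is a rough sketch of that 3-winner variant: Proportional Approval Voting by brute force over all 3-charity committees, followed by split (b), where each voter’s single point is divided evenly among the winners they approved. The ballots mirror the 15-vs-10 toy example discussed elsewhere in this thread and are purely illustrative.

```python
from itertools import combinations

def pav_score(committee, ballots):
    # Each voter contributes 1 + 1/2 + ... + 1/k, where k = number of approved winners.
    return sum(
        sum(1.0 / j for j in range(1, len(set(committee) & ballot) + 1))
        for ballot in ballots
    )

def pav_top3_with_split(ballots, charities, budget=100.0):
    winners = max(combinations(charities, 3), key=lambda c: pav_score(c, ballots))
    points = {c: 0.0 for c in winners}
    for ballot in ballots:
        approved = [c for c in winners if c in ballot]
        for c in approved:
            points[c] += 1.0 / len(approved)  # option (b): split each voter's point evenly
    total = sum(points.values())
    return {c: round(budget * points[c] / total, 2) for c in winners}

# 15 voters approve three longtermist charities, 10 approve three global health ones.
ballots = [{"LTFF", "MIRI", "Redwood"}] * 15 + [{"AMF", "GiveWell", "LEEP"}] * 10
print(pav_top3_with_split(ballots, ["LTFF", "MIRI", "Redwood", "AMF", "GiveWell", "LEEP"]))
# -> PAV picks 2 longtermist charities + 1 global health charity, roughly matching the 60/40 split.
```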
I’m happy to discuss further.
This definition of “voter utility” feels very different to how EAs think about charities: the definition would imply that you are indifferent between all charities that you approve of. A better definition of “voter utility” would take into account the relative worth of the charities (e.g. a voter might think that charity A is 3x better than charity B, which is 5x better than charity C).
I think that, since there can be multiple winners, letting people vote on their ideal distribution and then averaging those distributions would be better than direct voting, since it most directly represents “how voters think the funds should be split on average” or similar, which seems like what you want to capture? And it is also still very understandable, I hope.
E.g. if I think 75% of the pool should go to LTFF and 20% to GiveWell, and 5% to the EA AWF, 0% to all the rest, I vote 75%/20%/5%/0%/0%/0% etc. Then, you take the average of those distributions across all voters. I guess it gets tricky if you are only paying out to the top three, but maybe you can just scale their percentage splits? IDK.
If not that or if it is annoying to implement, IMO approval voting or quadratic are probably best, but am not really sure. Ranked choice feels like it is so explicitly designed for single winner elections that it is harder to apply here.
If we’re thinking of it as “ideally I’d like 75% of the money to go here, 20% here, etc” we could just give people 100 votes each and give money to the top 3?
Yeah definitely—that’s a more elegant way.
This would be very similar to first-past-the-post (third-past-the-post in this case), and has many of the same drawbacks as first-past-the-post, such as lots of strategic voting. Giving a voice to people whose favorite charities are not wildly popular seems preferable (as would be the case with ranked-choice voting). The fact that you have 100 votes instead of 1 vote doesn’t make much of a difference here (imagine a country where everyone has 99 clones; election systems would mostly still have the same advantages and disadvantages).
some thoughts on different mechanisms:
Quadratic voting:
I think this could be fun. An advantage here is that voters have to think about the relative value of different charities, rather than just deciding which are better or worse. This could also be an important aspect when we want people to discuss how they plan to vote/how others should vote. If you want to be explicit about this, you could also consider designing the user interface so that users enter these relative differences of charities directly (e.g. “I vote charity A to be 3 times as good as charity B” rather than “I assign 90 vote credits to charity A and 10 vote credits to charity B”). Note however, that due to the top-3 cutoff, putting in the true relative differences between charities might not be the optimal policy.
A technical remark: If you want only to do payouts for the top three candidates, instead of just relying on the final vote, I think it would be better to rescale the voting credits of each voter after kicking out the charity with the least votes and then repeating the process until there are only 3 charities left. This would reduce tactical voting and would respect voters more who pick unusual charities as their top choices. This process has some similarities with ranked-choice voting. Additionally, users should have the ability to enter large relative differences (or very tiny votes like 1 in a billion), so their votes are still meaningful even after many eliminations.
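Here is a rough sketch of what that elimination-and-rescaling process could look like, under one possible reading: each voter distributes 100 credits, a charity receives sqrt(credits) votes from each voter (the quadratic-voting cost rule), and after every elimination each voter’s surviving credits are rescaled back to 100. This is just my illustration of the idea, not a spec.

```python
from math import sqrt

def quadratic_top3(credit_ballots, num_winners=3):
    """credit_ballots: list of {charity: credits} dicts, each summing to ~100."""
    remaining = {c for ballot in credit_ballots for c in ballot}
    ballots = [dict(b) for b in credit_ballots]
    while len(remaining) > num_winners:
        votes = {c: 0.0 for c in remaining}
        for ballot in ballots:
            for c, credits in ballot.items():
                if c in remaining:
                    votes[c] += sqrt(credits)  # quadratic voting: votes = sqrt(credits spent)
        remaining.discard(min(votes, key=votes.get))  # drop the least-supported charity
        for ballot in ballots:  # rescale each voter's surviving credits back to 100
            live = {c: cr for c, cr in ballot.items() if c in remaining}
            total = sum(live.values())
            if total > 0:
                ballot.clear()
                ballot.update({c: 100.0 * cr / total for c, cr in live.items()})
    return remaining

# Example with made-up ballots over four charities:
print(quadratic_top3([
    {"A": 60, "B": 30, "C": 10},
    {"B": 50, "C": 50},
    {"A": 100},
    {"C": 40, "D": 60},
]))
```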
Approval voting:
I think voting either “approve” or “disapprove” does not match how EAs think about charities. I generally approve of a lot of charities within the EA space, but would not vote “approve” for all of these charities.
I worry that a lot of tactical voting can take place here, especially if people can see the current votes or the pre-votes. For example, a person who approves of both the 3rd-placed charity and the 4th-placed charity (by overall popularity) might want to switch their vote to “disapprove” for the (according to them) worse charity: voters are incentivized to give different votes to the 3rd-placed and 4th-placed charities, because that is where the difference will have the biggest impact on the money paid out. Or a person who disapproves of all the top charities might switch a vote from “disapprove” to “approve” so that their vote matters at all.
Ranked-choice voting:
I am assuming here that the elimination process in ranked-choice stops once you reach the top 3 and that votes are then distributed proportionally. I think this would be a good implementation choice (mostly because proportional voting itself would be a decent choice by itself, so doing it for the top 3 seems reasonable). Ranking charities could be more satisfying for voters than having to figure out where to draw the line between “approve” and “disapprove”, or putting in lots of numeric values.
Generally, ranked-choice voting seems like an ok choice.
how well will these allocate funds?:
I am quite unsure here, and finding the best charity based on the expressed preferences of lots of people with lots of opinions will be difficult in any case. My best guess here is that ranked-choice voting > quadratic voting > approval voting. A disadvantage of quadratic voting here is that some fraction of the money can end up being paid out to sub-optimal charities (even if everyone agrees that charity C is worse than A and B, it will likely still be rational for voters to assign non-zero weight to charity C, corresponding to non-zero payout).
understandability:
I think approval voting is easier to understand than ranked-choice voting, which is easier to understand than quadratic voting. This is both for the user interface and for understanding the whole system. Also, the mental effort for making a voting decision is less under ranked-choice and approval voting. I think the precise effects of the voters’ choices will be difficult to estimate in any system, so keeping the system easy to understand seems valuable.
general remarks:
Different voting mechanisms can be useful for different purposes, and paying 3 charities different amounts of money is a different use case than selecting a single president, so not all considerations and analyses of different voting mechanisms will carry over to our particular case. The top-3 rule will incentivize tactical voting in all these systems (whereas in a purely proportional system there would be no tactical voting). Maybe this number should be increased a bit (especially if we use quadratic voting). If there are lots of charities to choose from, it will be quite an effort to evaluate all these charities. Potentially, you could give each voter a small number of charities to compare with each other, and then aggregate the result somehow (although that would be complicated and would change the character of the election). Or there can be two phases of voting, where the first phase narrows it down to 3-5 charities and then the second phase determines the proportions.
My personal preferences:
Obviously, we should have a meta-vote to select the three top voting methods among user-suggested voting methods and then hold three elections with the respective voting methods, each determining how a fraction of the fund (proportional to the vote that the voting method received in the meta-vote) gets distributed. And as for the voting method for this meta-vote, we should use… ok, this meta-voting stuff was not meant entirely seriously.
In my current personal judgement, I prefer quadratic voting over ranked-choice and ranked-choice over approval voting. I might be biased here towards more complex systems. I think an important factor is also that I might like more data about my preferences as a voter: With quadratic voting, I can express my relative preferences between charities quantitatively. With ranked-choice voting, I can rank charities, but cannot say by how much I prefer one charity over another. With approval voting, I can put charities in only two categories.
One issue that comes up with multi-winner approval voting is: suppose there are 15 longtermists and 10 global poverty people. All the longtermists approve the LTFF, MIRI, and Redwood; all the global poverty people approve the Against Malaria Foundation, GiveWell, and LEEP.
The top three vote winners are picked: they’re the LTFF, with 15 votes, MIRI, with 15 votes, and Redwood, with 15 votes.
It is maybe undesirable that 40% of the people in this toy example think those charities are useless, yet 0% of money is going to charities that aren’t those. (Or maybe it’s not! If a coin lands heads 60% of the time, then you bet on heads 100% of the time.)
I’m going to stick my neck out and say that approval voting is the best option here. Why?
It avoids almost all of the problems with plurality voting. In non-pathological arrangements of voter preferences and candidates, it will produce the ‘intuitively’ correct option—see here for some fun visualisations.
It has EA cred: see Aaron Hamlin’s interview on 80k here.
And most importantly, it’s understandable and legible—you don’t need people to trust an underlying apportionment algorithm or send the flyers explaining the D’Hondt method to voters or whatever. Just vote for the options you approve of on the ballot. One person, one ballot. Most approvals wins. Simple.
I fear that EAs who are really into this sort of thing are going to nerd-snipe the whole thing into a discussion/natural experiment about optimal voting systems instead of what would be most practical for this Donation Election. A lot of potential voters and donors may not be interested in using a super fancy, optimal, but technically involved voting method, and it may be the kind of small inconvenience that turns people off the whole enterprise.
Now, before all you Seeing Like a State fans come at me saying how legibility is the devil’s work, I think I’m just going to disagree with you pre-emptively.[1] Sometimes there is a tradeoff between fidelity and legibility, and too much weighting on illegible technocracy can engender a lack of trust and have severe negative consequences.
Actually, it’s interesting that Glen references Scott as being on his side; I think there’s actually some tension between their positions. But that’s probably a topic for another post/discussion.
Won’t people be motivated to disapprove-vote orgs in all cause areas but their preferred one? That would seemingly reduce approval voting to FPTP as between cause areas, in effect.
Well, the top 3 charities will get chosen, so there’s no benefit to you selecting only 1 option unless you really do believe only that 1 charity ought to get funded. I think AV may be more robust to these concerns than some think,[1] and I think all voting systems will have these edge cases.
I also may be willing to simply bite the bullet here and trade off a bit of strategic voting for legibility. But again, I don’t think approval is worse on this front than many other voting methods.
But my fundamental objection is that this is primarily a normative problem, where we want to be a community who’ll vote honestly and not strategically. If GWWC endorse approval voting, then when you submit your votes there could be a pop-up with “I pledge not to vote strategically” or something like that.
I don’t think any voting system is immune to that—Democracy works well because of the norms it spreads and trust it instills, as opposed to being the optimal transmission mechanism of individual preferences to a social welfare function imho.
or here: https://link.springer.com/chapter/10.1007/978-1-4419-7539-3_2
Thanks. I assume there will be at least 3 orgs for each cause area.
If we can assume the forum is a “community who’ll vote honestly and not strategically,” approval voting would work—but we shouldn’t limit the winners to three in that case. Proportional representation among all orgs with net positive approval would be the fullest extent of the community’s views, although some floor on support or cap on winners would be necessary for logistical reasons.
I’d prefer a voting mechanism that factored in as much of the vote as possible. I suspect that cause area will be a major determinant of individuals’ votes, and would prefer that the voting structure promote engagement and participation for people with varying cause prioritizations.
Suppose we have 40% for cause A orgs, 25% for cause B orgs, 20% for cause C orgs, and 15% for various smaller causes. I would not prefer a method likely to select three organizations from cause A—I don’t think that outcome would be actually representative of the polis, and voting rules that would lead to such an outcome will discourage engagement and participation from people who sense that their preferred causes are not the leading one.
I’m not sure how to effectuate that preference in a voting system, although maybe people who have thought about voting systems more deeply than I could figure it out. I do think approval voting would be problematic; some voters might strategically disapprove all candidates except in their preferred cause area, which could turn the election into a cause-area election rather than an organization-specific one. Otherwise, it might be appropriate to assign each organization to a cause area, and provide that (e.g.) no more than half of all funds will go to organizations in the same cause area. If that rule were invoked, it would likely require selecting additional organizations than the initial three.
The more I think about this, the more I’d like at least one winner to be selected randomly among orgs that reach a certain vote threshold—unsure if it should be weighted by vote total or equal between orgs. Maybe that org gets 15 to 20 percent of the take? That’s a legible way to keep minority voices engaged despite knowing their preferences won’t end up reflected in the top three.
Proportional voting with some number of votes, between 1 and 10.
If it were me, the thing I’d experiment with is being able to donate votes to someone else. That feels like something I’d like to see more of on a larger scale. I give a vote each to Jenifer and Alan: she researches longterm stuff, he looks into animal welfare.
FWIW, I mildly disagree with this, because a major part of the appeal of donation elections stuff (if done well) is that the results more closely model a community consensus than other giving mechanisms, and being able to donate votes would distort that in some sense. I think I don’t see the appeal of being able to donate votes in this context over just telling Jenifer + Alan that they can control where one donates to some extent, or donating to a fund. Or, if not donating to the election fund, just asking Jenifer + Alan for their opinion and changing your own mind accordingly.
edit: I should have read the post more carefully
Do you intend to have one final winner or would it be ok to pay out the fund to various charities in different proportions (maybe with a minimum payout to avoid logistical hassle)? In the latter case, a consideration could also be proportional voting. But it is not clear how approval voting and ranked choice would work exactly in those cases. Also, am I understanding correctly that donating more to that fund does not get you additional votes?

We’re planning on having 3 winners, and we’ll allocate the funding proportionally across those three winners. So e.g. if we do approval voting, and candidate A gets 5 votes, B gets 2, C gets 20, and D gets 25, and we’re distributing $100, then A (5 votes), C (20 votes), and D (25 votes) win, and we’d send $10 to A, $40 to C, and $50 to D. I think this would straightforwardly work with quadratic voting (each person just has multiple vote-points). I haven’t thought enough about how “proportional” allocation would work with ranked-choice votes.
And yep, donating more to that fund won’t get you additional votes.
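For what it’s worth, the allocation described above is simple enough to sketch in a few lines (a toy illustration, reusing the numbers from the example):

```python
def allocate_top3(votes, fund=100.0):
    """votes: {charity: approval count}. Returns {winner: dollars}."""
    winners = sorted(votes, key=votes.get, reverse=True)[:3]  # top 3 by approval count
    total = sum(votes[c] for c in winners)
    return {c: fund * votes[c] / total for c in winners}  # split proportionally to votes

# The example above: A=5, B=2, C=20, D=25 votes, $100 fund.
print(allocate_top3({"A": 5, "B": 2, "C": 20, "D": 25}))  # {'D': 50.0, 'C': 40.0, 'A': 10.0}
```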
I keep coming back to this map/cartogram. It’s just so great.
I tried to do something similar a while ago looking at under-5 mortality.
Superman gets to business [private submission to the Creative Writing Contest from a little while back]
“I don’t understand,” she repeated. “I mean, you’re Superman.”
“Well yes,” said Clark. “That’s exactly why I need your help! I can’t spend my time researching how to prioritize while I should be off answering someone’s call for help.”
“But why prioritize? Can’t you just take the calls as they come?”
Lois clicked “Send” on the email she’d been typing up and rejoined the conversation. “See, we realized that we’ve been too reactive. We were taking calls as they came in without appreciating the enormous potential we had here. It’s amazing that we get to help people who are being attacked, help people who need our help, but we could also make the world safer more proactively, and end up helping even more people, even better, and when we realized that, when that clicked—”
“We couldn’t just ignore it.”
Tina looked back at Clark. “Ok, so what you’re saying is that you want to save people— or help people — and you think there are better and worse ways you could approach that, but you’re not sure which are which, and you realized that instead of rushing off to fight the most immediate threat, you want to, what, do some research and find the best way you can help?”
“Yes, exactly, except, they’re not just better, we think they might be seriously better. Like, many times better. The difference between helping someone who’s being mugged, which by the way is awful, so helping them is already pretty great, but imagine if there’s a whole city somewhere that needs water or something, and there are people dying, and I could be helping them instead. It’s awful to ignore the mugging, but if I’m going there, I’m ignoring the city, and of those...”
“Basically, you’re right, Tina, yes,” said Lois.
“Ok,” Tina felt like she was missing something. “But Lois, you’re this powerful journalist, and Clark, you’re Superman. You can read at, what, 100 words per second? Doesn’t it make more sense for you to do the research? I’d need to spend hours reading about everything from food supply chains in Asia to, I dunno, environmental effects of diverting rivers or something, and you could have read all the available research on this in a week.”
“It’s true, Clark reads fast, and we were even trying to split the research up like that at some point,” said Lois. “But we also realized that the time that Clark was spending reading, even if it wasn’t very long, he could be spending chasing off the villain of the week or whatever. And I couldn’t get to all the research in time. I tried for a while, but I have a job, I need to eat, I need to unwind and watch Spanish soap operas sometimes. I was going insane. So we’ve been stuck in this trap of always addressing the most urgent thing, and we think we need help. Your help.”
“Plus, we don’t even really know what we need to find out. I don’t know which books I should be reading. It’s not even just about how to best fix the problem that’s coming up, like the best way to help that city without water. It’s also about finding new problems. We could be missing something huge.”
“You mean, you need to find the metaphorical cities without water?” Clark was nodding. Lois was tapping out another email. “And you should probably be widening your search, too. Not just looking at people specifically, or looking for cities without water, but also looking for systems to improve, ways to make people healthier. Animals, too, maybe. Aliens? Are there more of you? I’m getting off track.” Tina pulled out the tiny notebook her brother gave her and began jotting down some questions to investigate.
“So, are you in?” Lois seemed a bit impatient. Tina set the notebook aside, embarrassed for getting distracted.
“I think so. I mean, this is crazy, I need to think about it a bit. But it makes sense. And you need help. You definitely shouldn’t be working as a journalist, Clark. I mean, not that I’m an expert, really, but—”
“You kind of are. The expert.” Tina absently noted that Clark perfectly fit her mental image of a proper Kansas farm boy. He was even wearing plaid.
“If you accept the offer,” Lois said, without looking up from her email.
“That’s a terrifying thought. It feels like there should be more people helping, here. You should have someone sanity-checking things. Someone looking for flaws in my reasoning. You should maybe get a personal assistant, too— that could free up a massive amount of your time, and hopefully do a ton of good.” Tina knew she was hooked, but wanted to slow down, wanted to run this whole situation by a friend, or maybe her brother. “Can I tell someone about this? Like, is all of this secret?”
Clark shook his head. “We don’t want to isolate you from your friends or anything. But there will be things that need to be secret. And we’ve had trouble before— secrets are hard—” Clark glanced apologetically at Lois, who looked up from her frantic typing for long enough to shoot him a look, “But as much as possible, we don’t want to fall into bad patterns from the past.”
“I guess there are some dangers with information leaking. You probably have secret weaknesses, or maybe you know things that are dangerous—” Tina’s mind was swirling with new ideas and new worries. “Wait a second, how did you even find me? How do you know I’m not going to, like, tell everyone everything...”
Clark and Lois looked at each other.
“We didn’t really think that through very much. You seemed smart, and nice, and you’d started that phone-an-anonymous-friend service in college. And you wrote a good analysis when we asked you to. Sorry about the lie about the consulting job, by the way.”
“And you really need help.” Tina nodded. “Ok, we definitely need to fine-tune the hiring process. And I’ll start by writing down a list of some key questions.”
“I’ll order takeout,” said Lois, and pulled out her phone.
[I wrote and submitted this shortly before the deadline, but was somewhat overwhelmed with other stuff and didn’t post it on the Forum. I figured I’d go ahead and post it now. (Thanks to everyone who ran, participated in, or encouraged the contest by reading/commenting!)]
Great!
A similar story exists here: https://archiveofourown.org/works/30351690
I really liked this. It was simple, but a smooth read and quite enjoyable. I’d be happy to see more of this type of content.
I recently ran a quick Fermi workshop, and have been asked for notes several times since. I’ve realized that it’s not that hard for me to post them, and it might be relatively useful for someone.
Quick summary of the workshop
What is a Fermi estimate?
Walkthrough of the main steps for Fermi estimation
Notice a question
Break it down into simpler sub-questions to answer first
Don’t stress about the details when estimating answers to the sub-questions
Consider looking up some numbers
Put everything together
Sanity check
Different models: an example
Examples!
Discussion & takeaways
Resources
Guesstimate is a great website for Fermi estimation (although you can also use scratch paper or spreadsheets if that’s what you prefer)
This is a great post on Fermi estimation
In general, you can look at a bunch of posts tagged “Fermi Estimation” on LessWrong or look at the Forum wiki description
Disclaimers:
I am not a Fermi pro, nor do I have any special qualifications that would give me credibility :)
This was a short workshop, aimed mostly at people who had done few or no Fermi estimates before
I attended and thoroughly enjoyed your workshop! Thanks for posting these notes.
Thanks for coming to the workshop, and for writing this note!
I don’t see mention of quantifying the uncertainty in each component and aggregating this (usually via simulation). Is this not fundamental to Fermi? (Is it only a special version of Fermi, the “Monte Carlo” version?)
Uncertainty is super important, and it’s really useful to flag. It’s possible I should have brought it up more during the workshop, and I’ll consider doing that if I ever run something similar.
However, I do think part of the point of a Fermi estimate is to be easy and quick.
In practice, the way I’ll sometimes incorporate uncertainty into my Fermis is by running the numbers in three ways:
my “best guess” for every component (2 hours of podcast episode, 100 episodes),
the “worst (reasonable) case” for every component (only 90? episodes have been produced, and they’re only 1.5 hours long, on average), and
the “best case” for every component (150 episodes, average of 3 hours).
Then this still takes very little time and produces a reasonable range: ~135 to 450 hours of podcast (with a best guess of 200 hours). (Realistically, if I were taking enough care to run the numbers 3 times, I’d probably put more effort into the “best guess” numbers I produced.) I also sometimes do something similar with a spreadsheet/more careful Fermi.
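If it helps, here’s what that three-pass approach looks like written out as a tiny script (same illustrative podcast numbers as above):

```python
# A minimal sketch of the three-pass approach: hours per episode x number of episodes,
# run once per scenario with the illustrative numbers from the example above.
scenarios = {
    "best guess": {"hours_per_episode": 2.0, "episodes": 100},
    "worst reasonable case": {"hours_per_episode": 1.5, "episodes": 90},
    "best case": {"hours_per_episode": 3.0, "episodes": 150},
}

for name, s in scenarios.items():
    total_hours = s["hours_per_episode"] * s["episodes"]
    print(f"{name}: ~{total_hours:.0f} hours of podcast")
# best guess: ~200 hours, worst reasonable case: ~135 hours, best case: ~450 hours
```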
I could do something more formal with confidence intervals and the like, and it’s truly possible I should be doing that. But I really think there’s a lot of value in just scratching something rough out on a sticky note during a conversation to e.g. see if a premise that’s being entertained is worth the time, or to see if there are big obvious differences that are being missed because the natural components being considered are clunky and incompatible (before they’re put together to produce the numbers we actually care about).
Note that tools like Causal and Guesstimate make including uncertainty pretty easy and transparent.
I agree, but making uncertainty explicit makes it even better. (And I think it’s an important epistemic/numeracy thing to cultivate and encourage.) So I think if you are giving a workshop, you should make this part of it at least to some extent.
I think this would be worth digging into. It can make a big difference, and it’s a mode we should be moving towards IMO; it should be at the core of our teaching and learning materials. And there are ways of doing this that are not so challenging.
(Of course, maybe in this particular podcast example it is not so important, but in general I think it’s VERY important.)
“Worst case all parameters” is very unlikely. So is “best case everything”.
See the book “How to Measure Anything” for a discussion. Also the Causal and Guesstimate apps.
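For anyone curious, here’s a rough sketch of the simulation version being described: put a distribution on each component and aggregate by sampling. The lognormal parameters below are made up for the podcast example, just to show the shape of the approach.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Made-up distributions: centred on 100 episodes and 2 hours per episode.
episodes = rng.lognormal(mean=np.log(100), sigma=0.2, size=n)
hours_per_episode = rng.lognormal(mean=np.log(2.0), sigma=0.2, size=n)

total_hours = episodes * hours_per_episode
print(f"median ~{np.median(total_hours):.0f} hours, "
      f"90% interval ~{np.percentile(total_hours, 5):.0f}-{np.percentile(total_hours, 95):.0f} hours")
```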
Time-of-perils- or existential-risk-themed image I made with DALL-E:
Moderation updates
Moderation update: We have indefinitely banned 8 accounts[1] that were used by the same user (JamesS) to downvote some posts and comments from Nonlinear and upvote critical content about Nonlinear. Please remember that voting with multiple accounts on the same post or comment is very much against Forum norms.
(Please note that this is separate from the incident described here)
my_bf_is_hot, inverted_maslow, aht_me, emerson_fartz, daddy_of_upvoting, ernst-stueckelberg, gpt-n, jamess
Was emerson_fartz an acceptable username in the first place? (It may not have had a post history in which case no one may have noticed its existence before the sockpuppeting detection, but that sounds uncivil toward a living person)
It was not, and indeed it was only used for voting, so we noticed it only during this investigation.
Moderation update:
We have strong reason to believe that Torres (philosophytorres) used a second account to violate their earlier ban. We feel that this means that we cannot trust Torres to follow this forum’s norms, and are banning them for the next 20 years (until 1 October 2042).
LukeDing (and their associated alt account) has been banned for six months, due to voting & multiple-account-use violations. We believe that they voted on the same comment/post with two accounts more than two hundred times. This includes several instances of using an alt account to vote on their own comments.
This is against our Forum norms on voting and using multiple accounts. We will remove the duplicate votes.
As a reminder, bans affect the user, not the account(s).
If anyone has questions or concerns, please feel free to reach out, and if you think we made a mistake here, you can appeal the decision.
We also want to add:
LukeDing appealed the decision; we will reach out to them and ask them if they’d like us to feature a response from them under this comment.
As some of you might realize, some people on the moderation team have conflicts of interest with LukeDing, so we wanted to clarify our process for resolving this incident. We uncovered the norm violation after an investigation into suspicious voting patterns, and only revealed the user’s identity to part of the team. The moderators who made decisions about how to proceed weren’t aware of LukeDing’s identity (they only saw anonymized information).
Is more information about the appellate process available? The guide to forum norms says “We’re working on a formal process for reviewing submissions to this form, to make sure that someone outside of the moderation team will review every submission, and we’ll update this page when we have a process in place.”
The basic questions for me would include: who decides appeals; how much deference (if any) the adjudicator will give to the moderators’ initial decision, which probably should vary based on the type of decision at hand; and what kind of contact between the mods and appellate adjudicator(s) is allowed. On the last point, I would prefer as little ex parte contact as possible, and would favor having an independent vetted “advocate for the appellant” looped in if there needs to be contact to which the appellant is not privy.
Admittedly I have a professional bias toward liking process, but I would err on the side of more process than less where accounts are often linked to real-world identities and suspensions are sometimes for conduct that could be seen as dishonest or untrustworthy. I would prefer public disclosure of an action taken in cases like this only after the appellate process is complete for the same reasons, assuming the user timely indicates a desire to appeal the finding of a norm violation.
Finally, I commend keeping the moderators deciding whether a violation occurred blinded as to the user’s identity as a best practice in cases like this, even where there are no COIs. It probably should be revealed prior to determining a sanction, though.
It does intuitively seem like an immediate temporary ban, made public only after whatever appeals are allowed have been exhausted, should give the moderation team basically everything they need while being more considerate of anyone whose appeals are ultimately upheld (i.e. innocent, or mitigating circumstances).
Moderation update: A new user, Bernd Clemens Huber, recently posted a first post (“All or Nothing: Ethics on Cosmic Scale, Outer Space Treaty, Directed Panspermia, Forwards-Contamination, Technology Assessment, Planetary Protection, (and Fermi’s Paradox)”) that was a bit hard to make sense of. We hadn’t approved the post over the weekend and hadn’t processed it yet, when the Forum team got an angry and aggressive email today from the user in question calling the team “dipshits” (and providing a definition of the word) for waiting throughout the weekend.
If the user disagrees with our characterization of the email, they can email us to give permission for us to share the whole thing.
We have decided that this is not a promising start to the user’s interactions on the Forum, and have banned them indefinitely. Please let us know if you have concerns, and as a reminder, here are the Forum’s norms.
Update: this user returned to the Forum yesterday to re-post the same piece. I’ve banned that account as well. Bans affect the user, not the account.
Moderation update:
We have strong reason to believe that Charles He used multiple new accounts to violate his earlier 6-month-long ban. We feel that this means that we cannot trust Charles He to follow this forum’s norms, and are banning him from the Forum for the next 10 years (until December 20, 2032).
We have already issued temporary suspensions to several suspected duplicate accounts, including one which violated norms about rudeness and was flagged to us by multiple users. We will be extending the bans for each of these accounts to mirror Charles’s 10-year ban, but are giving the users an opportunity to message us if we have made any of those temporary suspensions in error (and have already reached out to them). While we aren’t >99% certain about any single account, we’re around 99% that at least one of these is Charles He.
You can find more on our rules for pseudonymity and multiple accounts here. If you have any questions or concerns about this, please also feel free to reach out to us at forum-moderation@effectivealtruism.org.
I find this reflects worse on the mod team than on Charles. This is nowhere near the first time I’ve felt this way.
Fundamentally, it seems the mod team heavily prioritizes civility and following shallow norms above enabling important discourse. The post on forum norms says a picture of geese all flying in formation and in one direction is the desirable state of the forum; I disagree that this is desirable. Healthy conflict is necessary to sustain a healthy community. Conflict sometimes entails rudeness. Some rudeness here and there is not a big deal and does not need to be stamped out entirely. This also applies to the people who get banned for criticizing EA rudely, even when they’re criticizing EA for its role in one of the great frauds of modern history. Banning EA critics for minor reasons is a short-sighted move at best.
Banning Charles for 10 years (!!) for the relatively small crime of evading a previous ban is a seriously flawed idea. Some of his past actions like doxxing someone (without any malice I believe) are problematic and need to be addressed, but do not deserve a 10 year ban. Some of his past comments, especially farther in the past, have been frustrating and net-negative to me, but these negative actions are not unrelated to some of his positive traits, like his willingness to step out of EA norms and communicate clearly rather than like an EA bot. The variance of his comments has steadily decreased over time. Some of his comments are even moderator-like, such as when he warned EA forum users not to downvote a WSJ journalist who wasn’t breaking any rules. I note that the mod team did not step in there to encourage forum norms.
I also find it very troubling that the mod team has consistent and strong biases in how it enforces its norms and rules, such as not taking any meaningful action against an EA in-group member for repeated and harmful violations of norms but banning an EA critic for 20 years for probably relatively minor and harmless violations. I don’t believe Charles would have received a similar ban if he was an employee of a brand name EA org or was in the right social circles.
Finally, as Charles notes, there should be an appeals process for bans.
I don’t think repeatedly evading moderator bans is a “relatively small crime”. If Forum moderation is to mean anything at all, it has to be consistently enforced, and if someone just decides that moderation doesn’t apply to them, they shouldn’t be allowed to post or comment on the Forum.
Charles only got to his 6-month ban via a series of escalating minor bans, most of which I agreed with. I think he got a lot of slack in his behaviour because he sometimes provided significant value, but he also sometimes (too frequently) behaved in ways that were seriously out of kilter with the goal of a healthy Forum.
I personally think the 10-year thing is kind of silly and he should just have been banned indefinitely at this point, then maybe have the ban reviewed in a little while. But it’s clear he’s been systematically violating Forum policies in a way that requires serious action.
I have no idea if this was intentional on the part of the moderators, but they aren’t all flying in the same direction. ;-)
Indefinite suspension with leave to seek reinstatement after a stated suitable period would have been far preferable to a 10-year ban. A tenner isn’t necessary to vindicate the moderators’ authority, and the relevant conduct doesn’t give the impression of someone for whom ten years must pass before there is a reasonable probability that they would have become a suitable participant again.
It makes a lot of difference to me that Charles’ behavior was consistently getting better. If someone consistently flouts norms without any improvement, at some point they should be indefinitely banned. This is not the case with Charles. He started off with really high variance and at this point has reached a pretty tolerable amount. He has clearly worked on his actions. The comments he posted while flouting the mods’ authority generally contributed to the conversation. There are other people who have done worse things without action from the mod team. Giving him a 10 year ban without appeal for this feels more motivated by another instance of the mod team asserting their authority and deciding not to deal with messiness someone is causing than a principled decision.
I think this is probably true. I still think that systematically evading a Forum ban is worse behaviour (by which I mean, more lengthy-ban-worthy) than any of his previous transgressions.
I am not personally aware of any, and am sceptical of this claim. Open to being convinced, though.
Can you give some examples of this?
Various comments made by this user in multiple posts some time ago, some of which received warnings by mods but nothing beyond that.
Totally unrelated to the core of the matter, but do you intend to turn this into a frontpage post? I’m a bit inclined to say it’d be better for transparency, to inform others about the bans, and to deter potential violators… but I’m not sure; maybe you have a reason for preferring the shortform (or you’ll publish periodic updates on the frontpage).
In other forums and situations, there is a grace period where a user can comment after receiving a very long ban. I think this is a good feature that has several properties with long term value.
These are some of the accounts I created (but not all[1]):
anonymous-for-unimpressive-reasons
making-this-account (this was originally “making this account feels almost as bad as pulling a Holden,” but was edited by the moderators afterwards).
to-be-stuck-inside-of-mobile
worldoptimization-was-based
Here are some highlights of some of the comments made by the accounts, within about a 30 day period.
Pointing out the hollowness of SBF’s business, which then produced a follow up comment, which was widely cited outside the forum, and may have helped generate a media narrative about SBF.
Jabbing at some dismal public statements of Eliezer Yudkowsky’s, and the malign dynamics revealed by this episode. (Due to time limitations, I did not elaborate on the moral and intellectual defects of his justifications for keeping FTX funding, which, to my amazement and disappointment, got hundreds of upvotes and no substantive dissent.)
In a moderate way, exploring (blunting?) Oliver’s ill-advised (destructive?) strategy of radical disclosure.
A post making EAs aware of a major article revealing inside knowledge of SBF within EA; on net, this post was a release of tension in the EA community.
Trying to alleviate concerns about CEA’s solvency, and giving information about the nature of control and financing of CEA.
Defending Karnofsky and Moskovitz and making fun of them (this comment was the only comment Moskovitz has responded to in EA history so far).
Discouraging EA forum users from downvoting out of hand or creating blacklists/whitelists of journalists.
My alternate accounts were created successively, as they were successively banned. This was the only reason for subterfuge, which I view as distasteful.
I have information on the methods that the CEA team used to track my accounts (behavioral telemetry, my residential IP). These are not difficult to defeat. Not only did I not evade these methods, but I gave information about my identity several times (resulting in a ban each time). These choices, based on my distaste, are why the CEA team is “99% certain” (at least in a mechanical sense), and why I have this 10-year ban.
Other accounts not listed were created or used for purposes that I view as good, and are not relevant to the substance of this comment.
The only warning received on any of my alternate accounts was here:
This was a warning in response to my comment insulting another user. The user being insulted was Charles He.
I believe I am able to defend each of the actions on my previous bans individually (but never have before this). More importantly, I always viewed my behavior as a protest.
At this point, additional discussions are occurring within CEA[1], such as considering my ban from EAG and other EA events. By this, I’ll be joining blacklists of predators and deceivers.
As shown above, my use of alternate accounts did not promote or benefit me in any way (even setting aside expected moderator action). Others in EA have used sock puppets to try to benefit their orgs, and gone on to be very successful.
Note that the moderator who executed the ban above, is not necessarily involved in any way in further action or policy mentioned in my comments. Four different CEA staff members have reached out or communicated to me in the last 30 days.
Moderation update: We have banned “Richard TK” for 6 months for using a duplicate account to double-vote on the same posts and comments. We’re also banning another account (Anin, now deactivated), which seems to have been used by that same user or by others to amplify those same votes. Please remember that voting with multiple accounts on the same post or comment is very much against Forum norms.
(Please note that this is separate from the incident described here)
We’re issuing [Edit: identifying information redacted] a two-month ban for using multiple accounts to vote on the same posts and comments, and in one instance for commenting in a thread pretending to be two different users. [Edit: the user had a total of 13 double-votes; most were far apart and likely accidental, two were upvotes close together on others’ posts (which they claim were accidental as well), but two were deliberate self-upvotes from alternative accounts]
This is against the Forum norms around using multiple accounts. Votes are really important for the Forum: they provide feedback to authors and signal to readers what other users found most valuable, so we need to be particularly strict in discouraging this kind of vote manipulation.
A note on timing: the comment mentioned above is 7 months old but went unnoticed at the time, a report for it came in last week and triggered this investigation.
If [Edit: redacted] thinks that this is not right, he can appeal. As a reminder, bans affect the user, not the account.
[Edit: We have retroactively decided to redact the user’s name from this early message, and are currently rethinking our policies on the matter]
[A moderator had edited this comment to remove identifying information, after a moderation decision to retroactively redact the user’s identification]
I guess it makes sense that people who disagree with the norms are more likely to do underhanded things to violate them.
Just quickly noting that none of the double-votes were on that thread or similar ones, as far as I know.
Do suspended users get a chance to make a public reply to the mod team’s findings? I don’t think that’s always necessary—e.g., we all see the underlying conduct when public incivility happens—but I think it’s usually warranted when the findings imply underhanded behavior (“pretending”) and the underlying facts aren’t publicly observable. There’s an appeal process, but that doesn’t address the public-reputation interests of the suspended person.
It’s kind of jarring to read that someone has been banned for “violating a norm”—that word to me implies that they’re informal agreements between the community. Why not call them “rules”?
pinkfrog (and their associated account) has been banned for 1 month, because they voted multiple times on the same content (with two accounts), including upvoting pinkfrog’s comments with their other account. To be a bit more specific, this happened on one day, and there were 12 cases of double-voting in total (which we’ll remove). This is against our Forum norms on voting and using multiple accounts.
As a reminder, bans affect the user, not the account(s).
If anyone has questions or concerns, please feel free to reach out, and if you think we made a mistake here, you can appeal the decision.
Multiple people on the moderation team have conflicts of interest with pinkfrog, so I wanted to clarify our process for resolving this incident. We uncovered the norm violation after an investigation into suspicious voting patterns, and only revealed the user’s identity to part of the team. The moderators who made decisions about how to proceed aren’t aware of pinkfrog’s real identity (they only saw anonymized information).
Have the moderators come to a view on identifying information? Is pinkfrog the account with higher karma or more forum activity?
In other cases the identity has been revealed to various degrees:
LukeDing
JamesS
Richard TK (noting that an alt account in this case, Anin, was also named)
[Redacted]
Charles He
philosophytorres (but identified as “Torres” in the moderator post)
It seems inconsistent to have this info public for some, and redacted for others. I do think it is good public service to have this information public, but am primarily pushing here for consistency and some more visibility around existing decisions.
Agree. It seems potentially pretty damaging to people’s reputations to make this information public (and attached to their names); that strikes me as a much bigger penalty than the bans. There should, at a minimum, be a consistent standard, and I’m inclined to think that standard should be having a high bar for releasing identifying information.
I think we should hesitate to protect people from reputational damage caused by people posting true information about them. Perhaps there’s a case to be made when the information is cherry-picked or biased, or there’s no opportunity to hear a fair response. But goodness, if we’ve learned anything from the last 18 months I hope it would include that sharing information about bad behaviour is sometimes a public good.
I would guess that most people engage in private behavior that would be reputationally damaging if the internet were to find out about it. Just because something is true doesn’t mean you forfeit your rights to not have that information be made public.
I think people might reasonably (though wrongly) assume that forum mods are not monitoring accounts at this level of granularity, and thus believe that their voting behavior is private. Given this, I think mods should warn before publicly censoring. (Just as it would be better to inform your neighbor that you can see them doing something embarrassing through their window before calling the police or warning other people about them—maybe they just don’t realize you can see, and telling them is all they need to not do the thing anymore, which, after all, is the goal.)
Frankly, I don’t love that mods are monitoring accounts at this level of granularity. (For instance, knowing this would make me less inclined to put remotely sensitive info in a forum dm.)
Writing in a personal capacity; I haven’t run this by other mods.
Hi, just responding to these parts of your comment:
We include some detail on what would lead moderators to look into a user’s voting activity, and what information we have access to, on our “Guide to norms on the Forum” page:
(In addition, note that moderators can’t just go into a user’s account and check their voting history even when we do have reason to look into that user. We require one of the Forum engineers to run some queries on the back end to yield this information.)
Finally, to address your concern about direct messages on the Forum: like a regular user, a moderator cannot see into anyone else’s messages.
Hope this is helpful :)
Also writing in a personal capacity.
Thanks for writing this! To clarify a few points even more:
I confirm this, and just want to highlight that
this is pretty rare; we have a high bar before asking developers to look into patterns
usually, one developer looks into things, and shares anonymized data with moderators, who then decide whether it needs to be investigated more deeply
If so, a subset of moderators gets access to deanonymized data to make a decision and contact/warn/ban the user(s)
On
I confirm this, but I want to highlight that messages on the forum are not end-to-end encrypted and are, by default, sent via email as well (i.e. when you get a message on the forum you also get an email with the message). So forum developers and people who have or will have access to the recipient’s email inbox, or the forum’s email delivery service, can see the messages.
For very private communications, I would recommend using privacy-first end-to-end encrypted platforms like Signal.
Thanks; this is helpful and reassuring, especially re: the DMs. I had read this section of the norms page, and it struck me that the “if we have reason to believe that someone is violating norms around voting” clause was doing a lot of work. I would appreciate more clarification about what would lead mods to believe something like this (and maybe some examples of how you’ve come to have such beliefs). But this is not urgent, and thanks for the clarification you’ve already provided.
Yeah, this is a reasonable thing to ask. So, the “if we have reason to believe that someone is violating norms around voting” clause is intentionally vague, I believe, because if we gave more detail on the kinds of checks/algorithms we have in place for flagging potential violations, then this could help would-be miscreants commit violations that slip past our checks.
(I’m a bit sad that the framing here is adversarial, and that we can’t give users like you more clarification, but I think this state of play is the reality of running an online forum.)
If it helps, though, the bar for looking into a user’s voting history is high. Like, on average I don’t think we do this more than once or twice per month.
Thanks, this is also helpful! One thing to think about (and no need to tell me), is whether making the checks public could effectively disincentivize the bad behavior (like how warnings about speed cameras may as effectively disincentivize speeding as the cameras do themselves). But if there are easy workarounds, I can see why this wouldn’t be viable.
I agree that not all true things should be made public, but I think when it specifically pertains to wrongdoing and someone’s trustworthiness, the public interest can override the right to privacy. If you look into your neighbour’s window and you see them printing counterfeit currency, you go to the police first, rather than giving them an opportunity to simply hide their fraud better.
Maybe the crux is: I think forum users upvoting their own comments is more akin to them Facetuning dating app photos than printing counterfeit currency. Like, this is pretty innocuous behavior and if you just tell people not to do it, they’ll stop.
It seems like we disagree on how bad it is to self-vote (I don’t think it’s anywhere near the level of “actual crime”, but I do think it’s pretty clearly dishonest and unfair, and for such a petty benefit it’s hard for me to feel sympathetic to the temptation).
But I don’t think it’s the central point for me. If you’re simultaneously holding that:
this information isn’t actually a big deal, but
releasing this publically would cause a lot of harm through reputational damage,
then there’s a paternalistic subtext where people can’t be trusted to come to the “right” conclusions from the facts. If this stuff really wasn’t a big deal, then talking about it publicly wouldn’t be a big deal either. I don’t think people should be shunned forever and excluded from any future employment because they misused multiple accounts on the forum. I do think they should be a little embarrassed, and I don’t think that moving to protect them from that embarrassment is actually a kindness from a community-wide perspective.
I feel like this is getting really complicated and ultimately my point is very simple: prevent harmful behavior via the least harmful means. If you can get people to not vote for themselves by telling them not to, then just… do that. I have a really hard time imagining that someone who was warned about this would continue to do it; if they did, it would be reasonable to escalate. But if they’re warned and then change their behavior, why do I need to know this happened? I just don’t buy that it reflects some fundamental lack of integrity that we all need to know about (or something like this).
I think that posting that someone is banned and why they were banned is not mainly about punishing them. It’s about helping people understand what the moderation team is doing, how rule-breaking is handled, and why someone no longer has access to the forum. For example, it helps us to understand if the moderation team are acting on inadequate information, or inconsistently between different people. The fact that publishing this information harms people is an unfortunate side effect, after the main effect of improving transparency and keeping people informed.
It doesn’t even really feel right to call them harmed by the publication. If people are harmed by other people knowing they misuse the voting system, I’d say they were mainly harmed by their own misuse of the system, not by someone reporting on it.
Then you needn’t object to the moderation team talking about what they did!
It’s unclear to me that naming names materially advances the first two goals. As to the third, the suspended user could have the option of having their name disclosed. Otherwise, I don’t think we’re entitled to an explanation of why a particular poster isn’t active anymore.
There’s also the interest in deterring everyone else from doing it (general deterrence), not just in getting these specific people to stop doing it (specific deterrence). While I have mixed feelings about publicly naming offenders, the penalty does need to sting enough to make the benefits of the offense not worth the risk of getting caught. A private warning with no real consequences might persuade the person violating the rules not to do it again, but double-voting would surge as people learned you get a freebie.
“double-voting would surge as people learned you get a freebie.”
I just don’t see this happening?
Separately, one objection I have to cracking down hard on self-voting is that I think this is not very harmful relative to other ways in which people don’t vote how they’re “supposed to.” E.g., we know the correlation between upvotes and agree votes is incredibly high, and downvoting something solely because you disagree with it strikes me as more harmful to discourse on the forum than self-voting. I think the reason self-voting gets highlighted isn’t because it’s especially harmful, it’s just because it’s especially catchable.
If the mods want to improve people’s voting behavior on the forum, I both wish they’d target different voting behavior (ie, the agree/upvoting correlation) and use different means to do it (ie, generating reports for people of their own voting correlations, whether they tend to upvote/downvote certain people, etc), rather than naming/shaming people for self-voting.
I think it’s more that upvoting your own posts from an alt is (1) willful, intentional behavior (2) aimed at deceiving the community about the level of support of a comment (3) for the person’s own benefit. Presumably, most people who are doing it are employing some sort of means to evade detection, which adds another layer of deceptiveness. While I don’t like downvoting-for-disagreement and the like either, that kind of behavior presumptively reflects a natural cognitive bias rather than any of the three characteristics listed above. It is for those reasons that—in my view—downvoting-for-disagreement is generally not the proper subject of a sanctioning system,[1] while self-upvoting is.
I’ve suggested to the mods before that sanctions should sometimes be more carefully tailored to the offense, so I’d be open to the view that consequences like permanently denying the violator’s ability to vote and their ability to use alts might be more tailored to the offense than public disclosure. Those are the specific functions which they have demonstrated an inability to handle responsibly. Neither function is so fundamental to the ability to use the Forum that the mods should feel obliged to expend their time deciding if the violator has rehabilitated themselves enough to restore those privileges.
There could be circumstances in which soft-norm-violative behavior was so extreme that sanctions should be considered. However, unlike “don’t multi-vote” (a bright-line rule, so the violator should be perfectly aware that they are breaking it), these norms are less clearcut—so privately reaching out to the person would be the appropriate first action in a case like that.
Fair point about reputational harms being worse and possibly too punishing in some cases. I think in terms of a proposed standard it might be worth differentiating (if possible) between e.g. careless errors, or momentary lapses in judgement that were quickly rectified and likely caused no harm in expectation, versus a pattern of dishonest voting intended to mislead the EAF audience, and especially if they or an org that they work for stand to gain from it, or the comments in question are directly harmful to another org. In these latter cases the reputational harm may be more justifiable.
For reasoning transparency / precedent development, it might be worthwhile to address two points:
(1) I seem to remember other multivoting suspensions being much longer than 1 month. I had gotten the impression that the de facto starting point for deliberate multiaccount vote manipulation was ~ six months. Was the length here based on mitigating factors, perhaps the relatively low number of violations and that they occurred on a single day? If the usual sanction is ~ six months, I think it would be good to say that here so newer users understand that multivoting is a really big deal.
(2) Here the public notice names the anon account pinkfrog (which has 3 comments + 50 karma), rather than the user’s non-anon account. The last multi account voting suspension I saw named the user’s primary account, which was their real name. Even though the suspension follows the user, which account is publicly named can have a significant effect on public reputation. How does the mod team decide which user to name in the public notice?
pinkfrog: 1 month (12 cases of double voting)
LukeDing: 6 months (>200 times)
JamesS: indefinite (8 accounts, number not specified)
[Redacted]: 2 months (13 double votes, most are “likely accidental”, two “self upvotes”)
RichardTK: 6 months (number not specified)
Charles He: 10 years (not quite analogous as these are using alts to circumvent initial bans, included other violations)
Torres: 20 years (not quite analogous as these are using alts to circumvent initial bans, included other violations)
Torres was banned for 20 years according to the link.
Corrected, thanks!
Reply to this comment from @John G. Halstead
(Written in a personal capacity, I did not check this with other moderators)
Thank you for the feedback! I didn’t want to go too off-topic, as this is unrelated to this post, so I’m replying here, but I want to quickly share some factual information for other readers.
You’re writing this in multiple comments. I want to make it clear that moderators did not endorse or “defend” (or symmetrically “attack”) the post as moderators. But of course, we do comment as users on parts we agree or disagree with (like any other user). Let us know if it’s not clear whether we’re commenting as users or as moderators.
As for your other warnings, I want to make sure other readers know that your last warning was not for discussing a specific topic, but for being uncivil and not constructive to the discussion. I agree that the situation in the first warning is less relevant to this case, apologies for bringing it up.
Just a quick note to say that we’ve removed a post sharing a Fermi estimate of the chances that the author finds a partner who matches their preferred characteristics and links to a date-me doc.
The Forum is for discussions about improving the world, and a key norm we highlight is “Stay on topic.” This is not the right space for coordinating dating. (Consider exploring LessWrong, ACX threads/classifieds, or EA-adjacent Facebook/Reddit/Discord groups for discussions that are primarily social.)
We’re not taking any other action about the author, although I’ve asked them to stay on topic in the future.
Moderation update:
Around a month ago, a post about the authorship of Democratising Risk got published. This post got taken down by its author. Before this happened, the moderation team had been deciding what to do with some aspects of the post (and the resulting discussion) that had violated Forum norms. We were pretty confident that we’d end up banning two users for at least a month, so we banned them temporarily while we sorted some things out.
One of these users was Throwaway151. We banned them for posting something a bit misleading (the post seemed to overstate its conclusions based on the little evidence it had, and wasn’t updated very quickly based on clear counter-evidence), and being uncivil in the comments. Their ban has passed, now. As a reminder, bans affect the user, not the account, so any other accounts Throwaway151 operated were also affected. The other user was philosophytorres — see the relevant update.
Quick update: we’ve banned Defacto, who we have strong reason to believe is another sockpuppet account for Charles He. We are extending Charles’s ban to be indefinite (he and others can appeal if they want to).
You can find more on our rules for pseudonymity and multiple accounts here. If you have any questions or concerns about this, please also feel free to reach out to us at forum-moderation@effectivealtruism.org.
We’ve banned Vee from the Forum for 1 year. Their content seems to be primarily or significantly AI-generated,[1] and it’s not clear that they’re using it to share thoughts they endorse and have carefully engaged with. (This had come up before on one of their posts.) Our current policy on AI-generated content makes it clear that we’ll be stricter when moderating AI-generated content. Vee’s content doesn’t meet the standards of the Forum.
If Vee thinks that this is not right, they can appeal. If they come back, we’ll be checking to make sure that their content follows Forum norms. As a reminder, bans affect the user, not the account.
Different detectors for AI content are giving this content different scores, but we think that this is sufficiently likely true to act on.
It’s hard to be certain that something is AI-generated, and I’m not very satisfied with our processes or policies on this front. At the same time, the increase in the number of bots has made dealing with spam or off-topic/troll contributions harder, and I think that waiting for something closer to certainty will have costs that are too high.
Update, we have unbanned Vee. We are new to using AI detection tools and we made a mistake. We apologize.
Moderation update:
I’m indefinitely banning JasMaguire for an extremely racist comment that has since been deleted. We’ll likely revisit and update our forum norms to explicitly discourage this sort of behavior.
Please feel free to get in touch with forum-moderation@effectivealtruism.org if you have any concerns.
Moderation update: We’re issuing dstudiocode a one-month ban for breaking Forum norms in their recent post and subsequent behavior. Specifically:
Posting content that could be interpreted as promoting violence or illegal activities.
The post in question, which asked whether murdering meat-eaters could be considered “ethical,” crosses a line in terms of promoting potential violence.
As a reminder, the ban affects the user, not just the account. During their ban period, the user will not be permitted to rejoin the Forum under another account name. If they return to the Forum after the ban period, we’ll expect a higher standard of norm-following and compliance with moderator instructions.
You can reach out to forum-moderation@effectivealtruism.org with any questions. You can appeal this decision here.
I suggest editing this comment to note the partial reversal on appeal and/or retracting the comment, to avoid the risk of people seeing only it and reading it as vaguely precedential.
Strong +1 to Richard, this seems a clear incorrect moderation call and I encourage you to reverse it.
I’m personally very strongly opposed to killing people because they eat meat, and the general ethos behind that. I don’t feel in the slightest offended or bothered by that post, it’s just one in a string of hypothetical questions, and it clearly is not intended as a call to action or to encourage action.
If the EA Forum isn’t somewhere where you can ask a perfectly legitimate hypothetical question like that, what are we even doing here?
The moderators have reviewed the decision to ban @dstudiocode after users appealed the decision. Tl;dr: We are revoking the ban, and are instead rate-limiting dstudiocode and warning them to avoid posting content that could be perceived as advocating for major harm or illegal activities. The rate limit is due to dstudiocode’s pattern of engagement on the Forum, not simply because of their most recent post—for more on this, see the “third consideration” listed below.
More details:
Three moderators,[1] none of whom was involved in the original decision to ban dstudiocode, discussed this case.
The first consideration was “Does the cited norm make sense?” For reference, the norm cited in the original ban decision was “Materials advocating major harm or illegal activities, or materials that may be easily perceived as such” (under “What we discourage (and may delete or edit out)” in our “Guide to norms on the Forum”). The panel of three unanimously agreed that having some kind of Forum norm in this vein makes sense.
The second consideration was “Does the post that triggered the ban actually break the cited norm?” For reference, the post ended with the question “should murdering a meat eater be considered ‘ethical’?” (Since the post was rejected by moderators, users cannot see it.[2] We regret the confusion caused by us not making this point clearer in the original ban message.)
There was disagreement amongst the moderators involved in the appeal process about whether or not the given post breaks the norm cited above. I personally think that the post is acceptable since it does not constitute a call to action. The other two moderators see the post as breaking the norm; they see the fact that it is “just” a philosophical question as not changing the assessment.[3] (Note: The “meat-eater problem” has been discussed elsewhere on the Forum. Unlike the post in question, in the eyes of the given two moderators, these posts did not break the “advocating for major harm or illegal activities” norm because they framed the question as about whether to donate to save the life of a meat-eating person, rather than as about actively murdering people.)
Amongst the two appeals-panelist moderators who see the post as norm-breaking, there was disagreement about whether the correct response would be a temporary ban or just a warning.
The third consideration was around dstudiocode’s other actions and general standing on the Forum. dstudiocode currently sits at −38 karma following 8 posts and 30 comments. This indicates that their contributions to the discourse have generally not been helpful.[4] Accordingly, all three moderators agreed that we should be more willing to (temporarily) ban dstudiocode for a potential norm violation.
dstudiocode has also tried posting very similar, low-quality (by our lights) content multiple times. The post that triggered the ban was similar to, though more “intense” than, this other post of theirs from five months ago. Additionally, they tried posting similar content through an alt account just before their ban. When a Forum team member asked them about their alt, they appeared to lie.[5] All three moderators agreed that this repeated posting of very similar, low-quality content warrants at least a rate limit (i.e., a cap on how much the user in question can post or comment).[6] (For context, eight months ago, dstudiocode published five posts in an eight-day span, all of which were low quality, in our view. We would like to avoid a repeat of that situation: a rate limit or a ban are the tools we could employ to this end.) Lying about their alt also makes us worried that the user is trying to skirt the rules.
Overall, the appeals panel is revoking dstudiocode’s ban, and is replacing the ban with a warning (instructing them to avoid posting content that could be perceived as advocating for major harm or illegal activities) and a rate limit. dstudiocode will be limited to at most one comment every three days and one post per week for the next three weeks—i.e., until when their original ban would have ended. Moderators will be keeping an eye on their posting, and will remove their posting rights entirely if they continue to publish content that we consider sufficiently low quality or norm-bending.
We would like to thank @richard_ngo and @Neel Nanda for appealing the original decision, as well as @Jason and @dirk for contributing to the discussion. We apologize that the original ban notice was rushed, and failed to lay out all the factors that went into the decision.[7] (Reasoning along the lines of the “third consideration” given above went into the original decision, but we failed to communicate that.)
If anyone has questions or concerns about how we have handled the appeals process, feel free to comment below or reach out.
Technically, two moderators and one moderation advisor. (I write “three moderators” in the main text because that makes referring to them, as I do throughout the text, less cumbersome.)
The three of us discussed whether or not to quote the full version of the post that triggered the ban in this moderator comment, to allow users to see exactly what is being ruled on. By split decision (with me as the dissenting minority), we have decided not to do so: in general, we will probably avoid republishing content that is objectionable enough to get taken down in the first place.
I’m not certain, but my guess is that the disagreement here is related to the high vs. low decoupling spectrum (where high decouplers, like myself, are fine with entertaining philosophical questions like these, whereas low decouplers tend to see such questions as crossing a line).
We don’t see karma as a perfect measure of a user’s value by any means, but we do consider a user’s total karma being negative to be a strong signal that something is awry.
Looking through dstudiocode’s post and comment history, I do think that they are trying to engage in good faith (as opposed to being a troll, say). However, the EA Forum exists for a particular purpose, and has particular standards in place to serve that purpose, and this means that the Forum is not necessarily a good place for everyone who is trying to contribute. (For what it’s worth, I feel a missing mood in writing this.)
In response to our request that they stop publishing similar content from multiple accounts, they said: “Posted from multiple accounts? I feel it is possible that the same post may have been created because maybe the topic is popular?” However, we are >99% confident, based on our usual checks for multiple account use, that the other account that tried to publish this similar content is an alt controlled by them. (They did subsequently stop trying to publish from other accounts.)
We do not have an official policy on rate limits, at present, although we have used rate limits on occasion. We aim to improve our process here. In short, rate limits may be a more appropriate intervention than bans are for users who aren’t clearly breaking norms, but who are nonetheless posting low-quality content or repeatedly testing the edges of the norms.
Notwithstanding the notice we published, which was a mistake, I am not sure if the ban decision itself was a mistake. It turns out that different moderators have different views on the post in question, and I think the difference between the original decision to ban and the present decision to instead warn and rate limit can mostly be chalked up to reasonable disagreement between different moderators. (We are choosing to override the original decision since we spent significantly longer on the review, and we therefore have more confidence in the review decision being “correct”. We put substantial effort into the review because established users, in their appeal, made some points that we felt deserved to be taken seriously. However, this level of effort would not be tenable for most “regular” moderation calls—i.e., those involving unestablished or not-in-great-standing users, like dstudiocode—given the tradeoffs we face.)
Seems reasonable (tbh with that context I’m somewhat OK with the original ban), thanks for clarifying!
I appreciate the thought that went into this. I also think that using rate-limits as a tool, instead of bans, is in general a good idea. I continue to strongly disagree with the decisions on a few points:
I still think including the “materials that may be easily perceived as such” clause has a chilling effect.
I also remember someone’s comment that the things you’re calling “norms” are actually rules, and it’s a little disingenuous to not call them that; I continue to agree with this.
The fact that you’re not even willing to quote the parts of the post that were objectionable feels like an indication of a mindset that I really disagree with. It’s like… treating words as inherently dangerous? Not thinking at all about the use-mention distinction? I mean, here’s a quote from the Hamas charter: “There is no solution for the Palestinian question except through Jihad.” Clearly this is way way more of an incitement to violence than any quote of dstudiocode’s, which you’re apparently not willing to quote. (I am deliberately not expressing any opinion about whether the Hamas quote is correct; I’m just quoting them.) What’s the difference?
“They see the fact that it is “just” a philosophical question as not changing the assessment.” Okay, let me now quote Singer. “Human babies are not born self-aware, or capable of grasping that they exist over time. They are not persons… the life of a newborn is of less value than the life of a pig, a dog, or a chimpanzee.” Will you warn/ban me from the EA forum for quoting Singer, without endorsing that statement? What if I asked, philosophically, “If Singer were right, would it be morally acceptable to kill a baby to save a dog’s life?” I mean, there are whole subfields of ethics based on asking about who you would kill in order to save whom (which is why I’m pushing on this so strongly: the thing you are banning from the forum is one of the key ways people have had philosophical debates over foundational EA ideas). What if I defended Singer’s argument in a post of my own?
As I say this, I feel some kind of twinge of concern that people will find this and use it to attack me, or that crazy people will act badly inspired by my questions. I hypothesize that the moderators are feeling this kind of twinge more generally. I think this is the sort of twinge that should and must be overridden, because listening to it means that your discourse will forever be at the mercy of whoever is most hostile to you, or whoever is craziest. You can’t figure out true things in that situation.
(On a personal level, I apologize to the moderators for putting them in difficult situations by saying things that are deliberately in the grey areas of their moderation policy. Nevertheless I think it’s important enough that I will continue doing this. EA is not just a group of nerds on the internet any more, it’s a force that shapes the world in a bunch of ways, and so it is crucial that we don’t echo-chamber ourselves into doing crazy stuff (including, or especially, when the crazy stuff matches mainstream consensus). If you would like to warn/ban me, then I would harbor no personal ill-will about it, though of course I will consider that evidence that I and others should be much more wary about the quality of discourse on the forum.)
On point 4:
I’m pretty sure we could come up with various individuals and groups of people that some users of this forum would prefer not to exist. There’s no clear and unbiased way to decide which of those individuals and groups could be the target of “philosophical questions” about the desirability of murdering them and which could not. Unless we’re going to allow the question as applied to any individual or group (which I think is untenable for numerous reasons), the line has to be drawn somewhere. “Would it be ethical to get rid of this meddlesome priest?” should be suspendable or worse (except that the meddlesome priest in question has been dead for over eight hundred years).
And I think drawing the line at “we’re not going to allow hypotheticals about murdering discernable people”[1] is better (and poses less risk of viewpoint suppression) than expecting the mods to somehow devise a rule for when that content will be allowed and consistently apply it. I think the effect of a bright-line no-murder-talk rule on expression of ideas is modest because (1) posters can get much of the same result by posing non-violent scenarios (e.g., leaving someone to drown in a pond is neither an act of violence nor generally illegal in the United States) and (2) there are other places to have discussions if the murder content is actually important to the philosophical point.[2]
By “discernable people,” I mean those with some sort of salient real-world characteristic as opposed to being 99-100% generic abstractions (especially if in a clearly unrealistic scenario, like the people in the trolley problem).
I am not expressing an opinion about whether there are philosophical points for which murder content actually is important.
Do you think it is acceptable to discuss the death penalty on the forum? Intuitively this seems within scope—historically we have discussed criminal justice reform on the forum, and capital punishment is definitely part of that.
If so, is the distinction state violence vs individual violence? This seems not totally implausible to me, though it does suggest that the offending poster could simply re-word their post to be about state-sanctioned executions and leave the rest of the content untouched.
I’ve weak karma downvoted and disagreed with this, then hit the “insightful” button. Definitely made me think and learn.
I agree that this is a really tricky question, and some of those philosophical conversations (including this one) are important and should happen, but I don’t think this particular EA forum is the best place for them, for a few reasons.
1) I think there are better places to have these often awkward, fraught conversations. I think they are often better had in person, where you can connect, preface, soften, and easily retract. I recently got into a mini online tiff, when a wise onlooker noted...
”Online discussions can turn that way with a few misinterpretations creating a doom loop that wouldn’t happen with a handshake and a drink”
Or alternatively perhaps in a more academic/narrow forum where people have similar discussion norms and understandings. This forum has a particularly wide range of users, from nerds to philosophers to practitioners to managers to donors so there’s a very wide range of norms and understandings.
2) There’s potential reputational damage for all the people doing great EA work across the spectrum here. These kinds of discussions could lead to more hit-pieces and reduced funding. It would be a pity if the AI apocalypse hit us because of funding cuts due to these discussions. (OK now I’m strawmanning a bit :D)
3) The forum might be an entry point into EA for some people. I don’t think it’s a good idea for these discussions to be the first thing someone looking into EA sees on the internet.
4) It might be a bit of a strawman to say our “discourse will forever be at the mercy of whoever is most hostile to you, or whoever is craziest.” I think people hostile to EA don’t like many things said here on the forum, but we aren’t forever at their mercy and we keep talking. I think there are just a few particular topics which give people more ammunition for public take-downs, and there is wisdom in sometimes avoiding loading balls into your opponents’ cannons.
5) I think if you (like Singer) write your own opinion in your own book, it’s a different situation—you are the one writing and you take full responsibility for your work—whereas on a public forum it at least feels like there is a smidgen of shared accountability for what is said. Forms of this debate have been going on for some time about what is posted on Twitter / Facebook etc.
6) I agree with you the quote from the Hamas charter is more dangerous—and think we shouldn’t be publishing or discussing that on the forum either.
I have great respect for these free speech arguments, and think this is a super hard question where the “best” thing to do might well change a lot over time, but right now I don’t think allowing these discussions and arguments on this particular EA forum will lead to more good in the long run.
Ty for the reply; a jumble of responses below.
You are literally talking about the sort of conversations that created EA. If people don’t have these conversations on the forum (the single best way to create common knowledge in the EA community), then it will be much harder to course-correct places where fundamental ideas are mistaken. I think your comment proceeds from the implicit assumption that we’re broadly right about stuff, and mostly just need to keep our heads down and do the work. I personally think that a version of EA that doesn’t have the ability to course-correct in big ways would be net negative for the world. In general it is not possible to e.g. identify ongoing moral catastrophes when you’re optimizing your main venue of conversations for avoiding seeming weird.
If you’re not able to talk about evil people and their ideologies, then you will not be able to account for them in reasoning about how to steer the world. I think EA is already far too naive about how power dynamics work at large scales, given how much influence we’re wielding; this makes it worse.
Insofar as you’re thinking about this as a question of coalitional politics, I can phrase it in those terms too: the more censorious EA becomes, the more truth-seeking people will disaffiliate from it. Habryka, who was one of the most truth-seeking people involved in EA, has already done so; I wouldn’t say it was directly because of EA not being truth-seeking enough, but I think that was one big issue for him amongst a cluster of related issues. I don’t currently plan to, but I’ve considered the possibility, and the quality of EA’s epistemic norms is one of my major considerations (of course, the forum’s norms are only a small part of that).
However, having said this, I don’t think you should support more open forum norms mostly as a concession to people like me, but rather in order to pursue your own goals more effectively. Movements that aren’t able to challenge foundational assumptions end up like environmentalists: actively harming the causes they’re trying to support.
Just to narrow in on a single point—I have found the ‘EA fundamentally depends on uncomfortable conversations’ point to be a bit unnuanced in the past. It seems like we could be more productive by delineating which kinds of discomfort we want to defend—for example, most people here don’t want to have uncomfortable conversations about age of consent laws (thankfully), but do want to have them about factory farming.
When I think about the founding myths of EA, most of them seem to revolve around the discomfort of applying utilitarianism in practice, or on how far we should expand our moral circles. I think EA would’ve broadly survived intact by lightly moderating other kinds of discomfort (or it may have even expanded).
I’m not keen to take a stance on whether this post should or shouldn’t be allowed on the forum, but I am curious to hear if and where you would draw this line :)
Narrowing in even further on the example you gave, as an illustration: I just had an uncomfortable conversation about age of consent laws literally yesterday with an old friend of mine. Specifically, my friend was advocating that the most important driver of crime is poverty, and I was arguing that it’s cultural acceptance of crime. I pointed to age of consent laws varying widely across different countries as evidence that there are some cultures which accept behavior that most westerners think of as deeply immoral (and indeed criminal).
Picturing some responses you might give to this:
"That's not the sort of uncomfortable claim you're worried about."
But many possible continuations of this conversation would in fact have gotten into more controversial territory. E.g. maybe a cultural relativist would defend those other countries having lower age of consent laws. I find cultural relativism kinda crazy (for this and related reasons) but it’s a pretty mainstream position.
"I could have made the point in more sensitive ways."
Maybe? But the whole point of the conversation was about ways in which some cultures are better than others. This is inherently going to be a sensitive claim, and it’s hard to think of examples that are compelling without being controversial.
"This is not the sort of thing people should be discussing on the forum."
But EA as a movement is interested in things like:
Criminal justice reform (which OpenPhil has spent many tens of millions of dollars on)
Promoting women’s rights (especially in the context of global health and extreme poverty reduction)
What factors make what types of foreign aid more or less effective
More generally, the relationship between the developed and the developing world
So this sort of debate does seem pretty relevant.
The important point is that we didn't know in advance which kinds of discomfort were of crucial importance. The relevant baseline here is not early EAs moderating ourselves; it's something like "the rest of academic philosophy/society at large moderating EA", which seems much more likely to have stifled early EA's ability to identify important issues and interventions.
(I also think we’ve ended up at some of the wrong points on some of these issues, but that’s a longer debate.)
Do you have an example of the kind of early EA conversation which you think was really important, which helped come up with core EA tenets, and which might be frowned upon or censored on the forum now? I'm still super dubious about whether leaving out a small number of specific topics really leaves much value on the table.
And I really think conversations can be had in more sensitive ways. In the case of the original banned post, just as good a philosophical conversation could be had without explicitly talking about killing people. The conversation was already being had on another thread, "the meat eater problem".
And as a sidebar, yeah, I wouldn't have any issue with the conversation above myself, because we have to discuss that practically with donors and internally when providing health care and getting confronted with tricky situations. Also (again sidebar) it's interesting that age of marriage/consent conversations can be where classic left-wing cultural relativism and gender safeguarding collide, and people don't know which way to swing. We've had to ask that question practically in our health centers, to decide who to give family planning to and when to think about referring to police etc. Super tricky.
My point is not that the current EA forum would censor topics that were actually important early EA conversations, because EAs have now been selected for being willing to discuss those topics. My point is that the current forum might censor topics that would be important course-corrections, just as if the rest of society had been moderating early EA conversations, those conversations might have lost important contributions like impartiality between species (controversial: you’re saying human lives don’t matter very much!), the ineffectiveness of development aid (controversial: you’re attacking powerful organizations!), transhumanism (controversial, according to the people who say it’s basically eugenics), etc.
Re “conversations can be had in more sensitive ways”, I mostly disagree, because of the considerations laid out here: the people who are good at discussing topics sensitively are mostly not the ones who are good at coming up with important novel ideas.
For example, it seems plausible that genetic engineering for human intelligence enhancement is an important and highly neglected intervention. But you had to be pretty disagreeable to bring it into the public conversation a few years ago (I think it’s now a bit more mainstream).
Assuming we’re only talking about the post Richard linked (and the user’s one recent comment, which is similar), I agree with this.
This moderation policy seems absurd. The post in question was clearly asking purely hypothetical questions, and wasn't even advocating for any particular answer. May as well ban users for asking whether it's moral to push a man off a bridge to stop a trolley, or ban Peter Singer for his thought experiments about infanticide.
Perhaps dstudiocode has misbehaved in other ways, but this announcement focuses on something that should be clearly within the bounds of acceptable discourse. (In particular, the standard of “content that could be interpreted as X” is a very censorious one, since you now need to cater to a wide range of possible interpretations.)
That is not the post in question. We removed the post that prompted the ban.
Ah, thanks, that's important context. I semi-retract my strongly worded comment above, depending on exactly how bad the removed post was; I can imagine posts in this genre that I think are genuinely bad.
Another comment from me:
I don't like my mod message, and I apologize for it. I was rushed and used some templated language that I knew damn well at the time I wasn't excited about putting my name behind. I nevertheless did, and I bear the responsibility.
That’s all from me for now. The mods who weren’t involved in the original decision will come in and reconsider the ban, pursuant to the appeal.
In the post that prompted the ban, they asked whether murdering meat-eaters could be considered ethical. I don't want to comment on whether this would be an appropriate topic for a late-night philosophy club conversation, but it is not an appropriate topic for the EA Forum.
I think speculating about what exactly constitutes the most good is perfectly on-topic. While ‘murdering meat-eaters’ is perhaps an overly direct phrasing (and of course under most ethical frameworks murder raises additional issues as compared to mere inaction or deprioritization), the question of whether the negative utility produced by one marginal person’s worth of factory farming outweighs the positive utility that person experiences—colloquially referred to as the meat-eater problem—is one that has been discussed here a number of times, and that I feel is quite relevant to the question of which interventions should be prioritized.
I’d separate out the removal and the suspension, and dissent only as to the latter.
I get why the mods would feel the need to give a wide berth to anything that some person could somehow "interpret[] as promoting violence or illegal activities." Making a rule against brief hypothetical mentions of the possible ethics of murder is defensible, especially in light of certain practical realities.
However, I can’t agree with taking punitive action against a user where the case that they violated the norm is this tenuous and there is a lack of fair prior notice of the mods’ interpretation. For that kind of action, I think the minimum standard would be either clear notice or content that a reasonable person would recognize could reasonably be interpreted as promoting violence. In other words, was the poster negligent in failing to recognize that violence promotion was a reasonable interpretation?
I don’t think the violence-promoting interpretation is a reasonable one here, and it sounds like several other users agree—which I take as evidence of non-negligence.
Here are slides from my “Writing on the Forum” workshop at EAGxBerlin.
If you voted in the Donation Election, how long did it take you? (What did you spend the most time on?)
I’d be really grateful for quick notes. (You can also private message me if you prefer.)
3-4 minutes, mostly spent playing through various elimination-order scenarios in my head and trying to ensure that my assigned values would still reflect my preferences in at least the more likely scenarios.
Took me ~5 min.
It took me just under 5 minutes.
The percentages I inputted were best guesses based on my qualitative impressions. If I’d been more quantitative about it, then I expect my allocations would have been better—i.e., closer to what I’d endorse on reflection. But I didn’t want to spend long on this, and figured that adding imperfect info to the commons would be better than adding no info.
IIRC it took me about a minute or two. But I already had high context and knew how I wanted to vote, so after getting oriented I didn’t have to spend time learning more or thinking through tradeoffs.
It took me ~1 minute. I already had a favourite candidate so I put all my points towards that. I was half planning to come back and edit to add backup choices but I’ve seen the interim results now so I’m not going to do that.
Probably about 30 minutes of unfocused thought on the actual voting. Mainly it was spent negotiating between what I thought was sort of best and some guilt- and status-based obligation stuff.
On top of that I perhaps read 2-4 articles and chatted to 1-2 people involved in orgs. I guess that was 1-3 hours.
I think around 5-10 mins? I tried to compare everything I cared at all about, so I only used multipliers between 0 and 2 (otherwise I would have lost track and ended up with intransitive preferences). The comparison stage took the most time. I edited things in the end a little bit, downgrading some charities to 0.
Tagging posts doesn’t work right now — apologies for the inconvenience! The Forum team is working on it, and I hope we’ll fix it soon.
And it’s fixed! 💜