Why People Use Burner Accounts: A Commentary on Blacklists, the EA “Inner Circle”, and Funding
I wanted to offer a different perspective on this post on why people use burner accounts from someone who has used one previously. This is not intended to be a rebuttal of the arguments made there. It is meant to add to the public discourse and made more sense as a separate post than a comment.
I hope that any upvotes/downvotes are given based on the effort you think I put into offering a different perspective, rather than on whether you agree or disagree with my comments (NB: I think there should be separate upvote/downvote and agree/disagree buttons for forum posts too).
Disclaimer
Note that...
I used a burner account in early 2021 after finding myself unhappy with the EA Community.
I’ve been visiting the EA Forums weekly since then and still occasionally go to EAGs.
I work at an EA organisation that receives significant funding from Open Philanthropy.
Reasons for Using Burners
The two biggest reasons for using burners are the potential operation of blacklists and funding. I’ll use them together (with anecdotes and some evidence) to make an overarching point at the end, so read all the way through.
Blacklists
Unfortunately, some EA groups and organisations use blacklists (this seems more common in the Bay Area).
Note that this is difficult to prove as...
I’d have to directly ask them, “Are you using a blacklist? I’ve heard X rumour that seems to suggest this”, and they’re very unlikely to say yes (if they are) as it’s not in their interests.
I don’t want to be seen as a “troublemaker” by suggesting an organisation is using a blacklist when I have a strong reason to believe they are. If they operate a blacklist, I’d likely be blacklisted from events for being a “troublemaker”.
[Anecdote suggesting the existence of a blacklist removed on request]
People have been told that they were supposedly blacklisted by organisers for being “epistemically weak” and not “truth-seeking” enough.
These are common terms used amongst Bay Area rationalists and even some funders.
When I searched the term truth-seeking on the EA Forums, I found this comment by a funder who was later asked by the OP of the post what “truth-seeking” meant.
Anecdotally, a friend of mine was rejected by the Open Philanthropy Undergraduate Scholarship with the grantmaker saying they weren’t “truth-seeking enough” as their feedback.
Strong Personal Opinion: I think the problem here is rationality. It provides a camouflage of formalism, dignity, and an intellectual high ground for when you want to be an absolute asshole, justify contrarian views, and quickly dismiss other people’s opinions. By contrarian views, I mean, for example, thinking diversity is not important (as many Bay Area rationalists do) when most of Western society and the media think it is.
Justifying this strong personal opinion with 20+ anecdotes would take a post on its own. I may write this in the future if there’s enough interest, and I can do so without the risk of doxing myself by accidentally de-anonymizing details.
Previously, I’ve pushed some rationalists on why they thought someone wasn’t “truth-seeking” enough or was “epistemically weak”. Around half the time, they couldn’t give a clear answer, which makes me believe they’re using these as buzzwords to camouflage the fact that they don’t like someone for whatever reason.
Funding
Another claim that is hard to prove is that ‘there is/has been an intermingling of funding and romantic relationships’. This becomes more complicated with the prevalence of polyamory.
I realised this to be disturbingly common about two years ago, but I chose not to speak up for fear of being blacklisted.
I’ve been inspired by courageous people making posts on this (with burner accounts). I didn’t have the courage to write a post when I first noticed this two years ago and still didn’t have the courage in the many successive times since.
A rumour about a senior program officer at Open Philanthropy and a grantee in a metamour-relationship has previously been ‘verified’ on the Forums.
There seems to be an inner circle of funders and grantees (predominantly in the Bay Area) where the grantees often don’t need to write grant applications and can just ask for money (often retroactively).
Note that this is also hard to prove. I could email Open Philanthropy asking this, but my organisation receives significant funding from them, and I don’t want to be seen as a “troublemaker”. I like to think they don’t operate a blacklist, but even if there’s a 1-20% chance they do, questioning a grant only to be later put on a blacklist with my organisation being defunded is not in my interests, given I have to make a living.
This touches on a wider point about having your entire life wrapped up in EA: your relationships (professional, personal, and romantic), your professional identity, and your personal identity, all while also needing to make a living. When this is true for hundreds of people in a tight-knit community with regular conferences and meetups, it leads to strange dynamics and decision-making.
Therefore, I feel comfortable questioning these grants using burner accounts.
I’ll do some of this now.
Despite Holden “pausing most new longtermist funding commitments” in November 2022, only to later unpause them in January 2023, the Atlas Fellowship (which falls under longtermism) received $1.8 million in December 2022. (Note the use of “most”; this suggests to me that the same rules don’t apply to everyone, as you’ll see below.)
I find this problematic as the Atlas Fellowship shares the same office space as Open Philanthropy in the Bay Area. The office is called Constellation.
In the Forum post linked on the word “Constellation”, it says, “Constellation is run by Redwood Research. Aside from the Redwood team, which is about 30 people, they have a coworking space that is more targeted at organisations rather than independent researchers or smaller projects. Thus, Constellation hosts staff from Open Phil, ARC, the FTX Future Fund, CEA, AI Impacts, Atlas Fellowship, MIRI, Lightcone, Alvea, and GCP. Access to Constellation is typically more limited than Lightcone. Currently (as of July 2022) there is no application form, and they are mostly focused on supporting members from the organisations in the space.”
Note that all these organisations (besides the FTX Future Fund) receive significant funding from Open Philanthropy. This includes Redwood Research.
The fact there is no application form seems to add to the cliquey-ness, and suggests there may be an “inner circle” of funders and grantees who hold each other less accountable.
I have visited Constellation before, and all three individuals in question in Rumour 4 of this post currently work there regularly or have done so in the past.
I didn’t realise until this post that a Senior Program Officer at Open Philanthropy is married to the CTO of Redwood Research. I find this disturbing as Redwood Research received $10.7m from Open Philanthropy without mention of the grant investigator.
(Encrypted in rot13) Ol Fravbe Cebtenz Bssvpre ng Bcra Cuvynaguebcl, V’z ersreevat gb Pynver Mnory. Ol PGB bs Erqjbbq Erfrnepu, V’z ersreevat gb Ohpx Fpuyrtrevf.
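(For readers unfamiliar with rot13, here is a minimal sketch for decoding it, assuming Python 3 and its standard-library codecs module; the ciphertext string below is a placeholder, not the post’s actual text.)

```python
import codecs

# rot13 shifts each letter 13 places, so encoding and decoding are the same operation.
ciphertext = "Uryyb Jbeyq"  # placeholder example; paste the rot13 text from the post instead
print(codecs.decode(ciphertext, "rot13"))  # prints "Hello World"
```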
More widely, I’d be interested in how power dynamics work at Open Philanthropy. Even if the grant investigator for Redwood Research is one of the other Program Officers at Open Philanthropy, does the Senior Program Officer (who, remember, is married to the CTO of Redwood Research) still have to sign off on the grant amount/decision that their subordinate makes?
I imagine there are power dynamics where a program officer wants a promotion, doesn’t want to risk being fired, or wouldn’t want to disappoint their superior (who is married to the CTO of Redwood Research) by giving her husband a smaller amount than requested.
Things become significantly more complicated once you throw polyamory into the mix.
By “ask for money (often retroactively)”, I am referring to the grant made to the Future Forum (a conference held Aug 4-7, 2022). What is true is that the Future Forum was kicked out of the advertised venue (in the Neogenesis Group House) due to noise complaints from neighbours (an attorney showed up on the driveway and told everyone to leave). The problem was that this happened on Day 1 of the conference; Day 2 was still in the group house, but after Day 2, the volunteers had to work through the night (reportedly with no breaks) to set up a new venue for Days 3 and 4. From the volunteers, I’ve heard that the Future Forum’s organisers were so bad that anyone from the CEA Events Team still in the Bay Area post-EAG SF (July 29-31, 2022) had to step in to clean up their mess. Cleaning up their mess included getting a new venue at the last minute (which was very expensive), which took them into large debt, followed by, reportedly, being bailed out by Open Philanthropy (retroactively). They could have been bailed out because the organisers were on good terms with the funders. Whilst I can’t verify this, I believe it is true, as I’ve seen instances of smaller grants (often in the $1,000s) where a well-connected grantee will spend the money, go into debt, and ask to be bailed out by a funder instead of applying for a grant to begin with. This is a bad culture.
NB: this grant also doesn’t have a grant investigator listed. I think all OP grants should have their grant investigators listed with an indication of who made the final call and what percentage of time each grantmaker spent on the application (which shouldn’t be hard to do with most time-tracking software).
My Overall Point
Ultimately, my overall point is that one reason for using a burner account (like in my case) is that if you don’t belong to the “inner circle” of funders and grantees, then I believe that different rules apply to you. And if you want to join that inner circle, you had better not question grants by directly emailing OP. And once you’re inside the inner circle, if you want to criticise grants, you must use a burner account or risk being de-funded or blacklisted. If you ask why you were blacklisted, the real reason of “we don’t like you, or you’re a trouble-maker” will be camouflaged as you being “epistemically weak” or “not truth-seeking enough.”
Edit 1: I edited the beginning of this post as per this comment.
Edit 2: Retracted Future Forum statement because of this comment.
Edit 3: Anecdote removed on request.
I’m Isaak, the lead organizer of Future Forum. Specifically addressing the points regarding Future Forum:
I don’t know whether retroactive funding happens in other cases. However, all grants made to Future Forum were committed before the event. The event and the organization received three grants in total:
Applications for the grants were usually sent 1-3 weeks before approval. While we had conversations with funders throughout, all applications went through official routes and application forms.
I received the specific grant application approval emails on:
Feb 28th, 2022, 9:36 AM PT,
July 5th, 2022, 5:04 PM PT,
July 18th, 2022, 11:28 AM PT.
The event ran from August 4-7th. I.e., we never had a grant committed “retroactively”.
Knowing that the event was experimental and that the core team didn’t have much operations experience, we had a budget for “logistical issues/fires” beforehand. This budget covered all of the additional expenses, including CEA’s expenses. We never went into debt or were bailed out by additional grants.
That being said, and beyond the scope of the original post’s concerns: Indeed, we didn’t plan thoughtfully for the neighbors. They were already sensitized because the venue had multiple parties over the weekends before. Thinking that Future Forum was some new party and not knowing it was a relatively quiet conference, they called the police. We were forced to leave the first venue and swap into a new venue by day 3.
CEA stepped in on day 1 and without them, the event would’ve been over on day 2, and day 3 and day 4 of the event wouldn’t have happened — for which we’re incredibly grateful.
I claim responsibility for not having foreseen and planned for this issue, and I also claim the default responsibility for all other operational issues and mistakes.
The points regarding the funding of Future Forum being retroactive are not true. All of this doesn’t reflect on OP’s grantmaking strategies and the other points addressed in the post.
Hi, I’m Leilani. I run the org that was brought on to help with Future Forum in the final weeks leading to the event.
I wanted to verify that no grants were applied for retroactively by Future Forum or Canopy. All funding was approved prior to the event. All OP funding was also received prior to the event. At no point have we been in debt.
Additionally, we are eternally grateful for the CEA events team and our volunteers for all their help. It was a stressful and unexpected situation that we would not have gotten through without them.
Hi, I’m Patrick Finley. I want to chime in here, because I think there’s a number of fairly ridiculous claims below about Future Forum (in addition to the original post). I also think Isaak’s response above is overly generous/conservative, and want to share my opinions on just how far the delta is between this post+comments and reality.
I attended Future Forum and it was easily the best conference/event I’ve been to in the Bay Area on the vectors that matter (ie, quality of conversations, people, speakers, new connections, etc) and frankly it’s not close (I’ve been to EAGs & others). I don’t mean this as a knock to others (the standard at EA & adjacent events seems pretty good), rather this was unusually great.
The negatives I heard during the event and afterwards were about behind the scenes stuff that didn’t seem to affect the actual value to attendees much (including the below). Eg I was at a hotel, and had a nice comfy bus take me to a new venue bc of issues w/ neighborhood. Not sure how this makes the event worse if you’re talking about the purpose of the event vs things that don’t matter for attendees.
The event went from idea to reality in like 3 months. Isaak founded a team, fundraised and ran it in that time. IMO it was the most talent dense event I have seen. Isaak was 20 and had moved to the US like 2 months before founding. If your takeaway is “Isaak & team now have a bad reputation”, I think you need to re-examine your talent/reputation model.
“organizer asking the speakers really dumb and basic questions about their work”—this seems to refer directly to Isaak. I watched most of the talks he mediated, and didn’t find this to be true at all. Perhaps this comes from, for example, experienced AI folks not realizing the questions needed to also cater to people new to AI. Something like ~half of FF attendees weren’t from an “EA” background.
FWIW I know some of the speakers and can personally attest the ones I know claimed to be personally impressed with Isaak. Anyone who’s spoken to Isaak knows he’s well capable of asking the right questions, and the take that he asked a bunch of dumb and basic questions can be debunked pretty quickly by talking to him for ~10m.
(also IMHO this is kind of an aggressive statement from someone who didn’t attend)
Doing great things tends to involve surprises and constant failures along the way, I think some folks below might be miscalibrated on what that looks like. Neighbors started a big fire, and the FF team moved the 300 person event to a big venue in San Francisco over night. When I heard this it was a massive positive update on the future forum team. That is seriously impressive.
The team pulling all-nighters to make this happen is exactly what building great things looks like—I’m confident if you polled the volunteers & core team you’d find a majority answer something like “it was one of the best experiences of my life” (this is a bold claim—fact check me). Note this is different from an unhealthy culture that tries to work people to death constantly for no good reason. It’s a strong signal that much of the team decided to pull an all-nighter to make the conference great. See https://patrickcollison.com/fast
“organizers were so bad, people had to step in to clean up their mess.”
To me, event bad = the value was poor. ie, people didn’t show up, bad speakers, waste of time. This did not happen, and doubt almost anyone who attended would feel it did.
Comment below says people stopped showing up after day 1/many people didn’t return bc no value. This is not true, perhaps ask folks who attended. It’s normal for a conference to not have regular full-capacity attendance the whole time, especially when it’s 4 days. Future Forum seemed full & busy the entire time to me.
Noting the neogenesis house has parties a lot, both before and after Future Forum. Given that data, the neighborhood complaints/police stuff were surprising, and not obvious to predict. Handled shockingly well. (At the time I thought, “eh, I would’ve seen that coming”, now having been to neogenesis many times, I would not have seen that coming—they have large gatherings a lot without issue. I admit the FF fire was surprising in retrospect)
Retroactive funding claim seems to be just false without any supporting logic? Unless I’m missing somewhere that might’ve led one to think there was retroactive funding?
Noting I also think retroactive funding when surprises come up is not necessarily some awful thing. If it had been that FF saw a big prob and desperately needed help, I think it makes total sense to help them (ie I think “let it fail” is a terrible approach) -- this is not the same as making it standard to expect bailouts...
Only gonna slightly chime in on original post’s main point—the whole relationships with funders thing—aside from like sexual relationships and other things mentioned, it seems pretty normal and good to have orgs build relationships with their funders… ie I don’t understand how the Holden<>Atlas thing above is an issue. I’m on board with the whole striving for a good culture thing, but some of these comments to me sound like utopia/not how the world works.
“Much more that went wrong”—just texted ~10 friends who went (many are now close friends I met at FF for the first time) and couldn’t find any other significant complaints, and all agreed venue change didn’t hurt the value to them.
Wrote this bc comments like “X had to step in and clean up their mess” seem incredibly off base. Bringing in help to put out a fire is exactly the right thing to do. Keeping in mind the context (first time event, first time organizing team, incredible talent of attendees, incredibly good speakers, etc), it’s pretty shocking to me that anyone’s takeaway could be “wow these guys made a mess and others had to clean it up for them”.
Again, IMO this was the best conference I’ve been to, but even if it weren’t, comparing this as a “mess” to EAGs that have been going on for a long time (ie tons of experience) is kind of an odd take.
Hi, I’m Zeynep! As someone who volunteered at Future Forum I want to give my piece as well. I completely agree with Patrick’s response below, the conference was exceptionally good and the majority of attendees from my experience feel the same way.
Although things did go wrong, as they easily can with any event, a lot of the volunteers worked very hard to fix the situation. This included staying up all night to fix the venue. I would like to highlight that when we found out we had to switch venues, we were given the option to not take part in moving equipment. Many volunteers agreed to help out regardless, and did not back out, and this in itself shows how much value even we as volunteers felt the event had to give. It may look trivial on paper, but staying up all night to move an entire venue is not for the faint of heart.
Although stress levels were briefly high, everything was handled exceedingly well internally and I was personally told by attendees that they did not feel that the venue change was an issue. As someone who has been to many many many conferences, I would say that the event was a huge success. Everything other than the venue change—from the food to the talks, ran without any hiccups, which is ridiculously impressive when you have witnessed things go very wrong in other events. I firmly think that the entire staff + volunteers deserve a big round of applause.
Suggestion of consensus opinion:
People should not be dating or living with those who work for them. If they start, they should tell HR so they can be transferred.
Dating conflicts of interest: I generally agree, but I’d like to flag that I think there are no cases mentioned in the post where people who are romantically involved work for each other. If people have some recent case, I’m open to being proved wrong (eg find some specific case of a grantmaker writing a grant to a partner). But I just don’t think this happens. Maybe it used to happen in early EA days, IDK, but I’d not think this has happened for years, because the reputational harm is great enough that it should incentivize grantmakers to recuse themselves and let an alternate review the application.
Now, rare exceptions:
I disagree that grantmakers who don’t have alternate grantmakers they can call in to determine grants from their managed fund should not be allowed to grant to a partner who applies. This amounts to barring the romantic partners of grantmakers from applying for some grants. Not only do I find it unethical and somewhat arbitrarily controlling (“you and only you, oh romantic partner, are not allowed to apply even though your project may very well be the best”), but this would systematically disadvantage women applicants as the majority of EA grantmakers are straight men. It’s possible this is basically a moot point, as almost every EA grantmaker could recuse themselves and someone else could do the job. But there are the rare tiny funds like Scott Alexander’s regranting program. Should his wife not be allowed to apply?
In cases of startups and small projects, people should be allowed to hire and cofound with their romantic partners, and they can keep those positions for as long as it makes sense to them.
Simply living with: I disagree that people who work together or for each other should not be housemates, as a mandatory rule. It should be on a case-by-case basis and I think people’s incentives are generally aligned so they make the useful choice here. So until I heard something weird I wouldn’t even bother looking into it tbh.
Your grant hypo is likely moot because making the grant without the partner recused would probably be unlawful—at least in the US and if a nonprofit were involved. That’s self-dealing and improper inurement (private gain) in my book.
Hm, if you say so. Then I really don’t know what people think is going on with EA funding that the legal system wouldn’t handle. I really have not heard of anything that I’d call actually-bad once you do your own digging.
EAs are really careful about who we give money to. That’s kind of our thing.
That said this is slightly more complex if people have one-off hookups with people. Not really sure what norms should be there.
I think this is generally correct, but not sure “living with” is clear enough, given that I don’t necessarily have any problem with a boss and employee living in the same group house, if they aren’t dating. (I’d imagine there could be some weird power dynamics for votes on house rules, etc. but I don’t think it’s necessarily an ethical problem, and hope that adults could work out how to deal with that.)
[EDIT: it looks like the summary in the post was wrong and there wasn’t any retroactive funding; see Isaak’s comment.]
In your section on asking for money retroactively you mostly rely on an event where you say:
It was organized by one group of organizers.
They messed up pretty badly and the CEA Events team took it over very late in the process (either just before or during, I can’t tell) and worked really hard to keep it going.
OP agreed to fund CEA’s work on this retroactively.
I’m just going on your description, but what would you have liked CEA and/or OP to do differently? Some options, none of which are great:
CEA doesn’t step in. Lots of people have a bad experience at an EA-[EDIT: adjacent?] event.
CEA waits to step in until they have funding formalized. This turns into #1, because they were stepping in at the last minute and there wouldn’t have been time. (Possibly CEA informally confirmed with OP that this was the kind of thing they would likely fund—I don’t know.)
CEA doesn’t ask OP to fund it or OP refuses, and CEA fundraises for it independently. This isn’t terrible, but it’s a complicated situation that requires a lot of trust, exactly the kind of situation where raising from one of your major funders (the only one with a board seat) is a good fit.
I think stuff failing is usually not as bad as it seems, and most of the reputational harm would have fallen on the organisers (who deserved it). On the other hand there are substantial risks to creating a perception that poorly executed things will be bailed out.
So from an outside view (I don’t know much about the event itself) I say they should have let it fail.
(EDIT: to be clear, in light of the new comments about the Future Forum: I think bailouts are often bad, if the Future Forum wasn’t actually bailed out then great. )
But how do you know reputational harm didn’t fall on the organizers? I assume it did tbh, regardless of bailing? Especially if there are blacklists (I’d prefer to call them “logs”), it seems like that is what they’d be for, eg “X screwed up that major thing even though they told us they were totally equipped and experienced in that type of thing”
Sorry, I wasn’t very clear. I agree the organisers suffered reputational damage regardless. I thought that the previous comment was arguing that a reason for bailing out was to prevent greater reputational damage than actually occurred. I was saying that I think the additional damage would mostly have also accrued to the organizers rather than, say, EA as a whole.
Re 1: it wasn’t really an EA-branded event though, I think.
Thanks; edited! Are you saying you like (1) because it wouldn’t actually have been that bad for it to implode?
I don’t have thoughts on that, just being nitpicky since the original framing was “EA-branded event” :)
Correction appreciated!
From the page I linked, it seems OP agreed to compensate the Future Forum and not CEA (although the Future Forum may have paid CEA for their time).
I have some hindsight bias in hearing the event was a disaster with...
Many people showing up on Day 1 and then not showing up on Day 2, 3, or 4 because it didn’t seem to be a good use of their time.
No forethought about potential noise complaints in a quiet neighbourhood with a nearby country club and stuck-up rich people.
Volunteers being pressured into working through the night to move equipment between venues.
The main organiser asking the speakers really dumb and basic questions about their work with many people in the audience reportedly cringing.
And much more that went wrong.
So I believe it’d be better for organisers to plan more, stick to their budget, and not use their strong grantor connections to get bailed out.
I’m not sure what the best solution is, nor do I want to spend hours thinking about this based on the limited information I have.
My comment addresses strong and unfair connections, using the bailing out of this event in particular as an example (which gives me some reason to believe other projects that have gone into debt have been bailed out because of strong grantor-grantee relationships).
I would have personally filed for bankruptcy and not made use of connections to get bailed out.
Obviously it would have been better if those organizers had planned better. It’s not clear to me that it would have been better for the event to just go down in flames; OP apparently agreed with me, which is why they stepped in with more funding.
I don’t think the Future Forum organizers have particularly strong relationships with OP.
“I would have personally filed for bankruptcy and not made use of connections to get bailed out.”
Well that’s super weird and self-flagellating. Like I’d ask you to please not, but I have a feeling you wouldn’t do that in that situation actually. Like, really, you forgot to do due diligence about noise and you think this one mistake in your career is worth bankruptcy?
For-profit example: Someone is hired to throw a conference on cereals for kellogs and they fuck it up and need a second venue. They call some higherup for budget approval and the answer is “omg no.. fuck well I guess we have to, here’s some money.” The extra money is spent to fix the fuckup as best as possible. The organiser is either fired for using too much money, or they’d get a bad performance review and lose future opportunities (my guess is this happened in the actual EA example to at least some extent, if the person didn’t already say to themselves “dang maybe I should try a different role or something”). If the fuckup is not their fault, maybe they get commended for handling it well, or not, but the company chalks it up to one of those investments where the roll of the dice just came up snake eyes. Unlucky.
I feel like in the for-profit world if you said that you’d take on bankruptcy rather than get the higher-ups involved, they’d be like “dude get a grip, your reaction is way out of scale and that’s kind of a bad standard to set for our employees working under you btw, they’re gonna burn out” and EA should say similar
Agree, although it should be noted that the grantor is not obliged to pick up the overrun in cost.
The event being a disaster doesn’t match my own experience attending and talking to other attendees, who, on the contrary, all seemed to find it very valuable too. Also, given the circumstances, the venue swap seemed to have been professionally handled IMO.
data point: I attended, and while I’m glad I did I felt misled by the promotional material. I know of at least two other people who felt the same, and attributed some of the blame to EA as a whole rather than the organizers.
Could you say more about the disconnect between promo and reality?
I went in expecting to be able to find mid-career people to hire, but there weren’t any there. Attendees were either senior people looking to hire (broadly defined), or too junior for all but charitable internships (charitable meaning you don’t expect them to be positive EV for your own company, ever, and offer it strictly as a service to the intern). I like mentoring and would very plausibly have signed up for a mentor mixer type thing, but was much worse at it because I was in a hiring mindset.
As I said, I ended up having good conversations with both ultra-junior and senior people, and if offered the chance to redo I’d still go to the days at the first venue. But I know at least two people who had also come to hire mid-career people and felt bait and switched, and one of those… I forget if they literally used the word “exploited”, but it was at least something close to that.
So you would have avoided paying the venue that rescued you in an emergency? That seems worse than letting open phil give you money.
Or you could have just ended the conference, screwing over people who rearranged their lives and paid for flights for the conference? Even if you think the reputational damage will hit only the organizers and not EA in general, that’s a lot of damage to people who did nothing wrong.
I think there are very reasonable questions to be asked about the organizers and the process that got them funding in the first place. But once they were at day 2 of a conference, OP paying for a second venue seems like the best of a bad set of options.
I absolutely believe my above comment, but am unhappy that it is my only response to this post.
I have a lot of disagreements with this post, both factually and in principle, and I wish it had been written very differently. But my guess is that many people who read this will walk away with a more accurate picture of the world. Not a pareto improvement, they’ll have less accurate impressions of some parts, but it overall represents an improvement, and that’s good. And people can correct the false parts, so the net improvement might be even higher.
I’m not going to deal with the topic of the post, but there’s another reason to not post under a burner account if it can be avoided that I haven’t seen mentioned, which this post indirectly highlights.
When people post under burner accounts, it makes it harder to be confident in the information that the posts contain, because there is ambiguity and it could be the same person repeatedly posting. To give one example (not the only one), if you see X number of burner accounts posting “I observe Y”, then that could mean anywhere from 1 to X observations of Y, and it’s hard to get a sense of the true frequency. This means it undermines the message of those posting, to post under burners, because some of their information will be discounted.
In this post, the poster writes “Therefore, I feel comfortable questioning these grants using burner accounts,” which suggests in fact that they do have multiple burner accounts. I recognize that using the same burner account would, over time, aggregate information that would lead to slightly less anonymity, but again, the tradeoff is that it significantly undermines the signal. I suspect it could lead to a vicious cycle for those posting, if they repeatedly feel like their posts aren’t being taken seriously.
Here’s an example of a past case where a troll (who also trolled other online communities) made up multiple sock-puppet accounts, and assorted lies about sources for various arguments trashing AI safety, e.g. claiming to have been at events they were not and heard bad things, inventing nonexistent experts who supposedly rejected various claims, creating fake testimonials of badness, smearing people who discovered the deception, etc.
One thing I’d like to quickly flag on the topic of this comment: using multiple accounts to express the same opinion (e.g. to create the illusion of multiple independent accounts on this topic) is a (pretty serious) norm violation. You can find the full official norms for using multiple accounts here.
This doesn’t mean that e.g. if you posted something critical of current work on forecasting at some point in your life, you can’t now use an anonymous account to write a detailed criticism of a forecasting-focused organization. But e.g. commenting on the same post/thread with two different accounts is probably quite bad.
I agree with the second paragraph of this comment.
Regarding the third paragraph,
In my specific case,
I acknowledged in the post that I previously used a burner two years ago whose password I did not save (due to it being a burner), and I therefore found myself logged out of it. I would have used the same burner otherwise.
I could flip this around and say that I don’t have other active burners because of the fact that my next bullet point in the post was “I’ll do some of this now”, with me proceeding to comment on recent grants in the same post under the same burner instead of making a different post with a different burner.
The use of the plural term “burner accounts” is me talking about burner accounts in the abstract rather than me saying I have multiple burner accounts.
Can you name the prior burner to establish a link?
I just looked up the post I made back then.
My thinking has evolved over the past two years since making the post, and I think it adds little value to this post to establish the link.
I would think differently had I carried on using that burner account, and people could read many posts showing how my thinking has evolved (which would instead turn it into a pseudonym instead of a burner), but a two-year gap doesn’t show any evolution in thought.
Edit: I’m aware this discredits my previous statement of, “I would have used the same burner otherwise”, but that statement was made before I read that previous post.
Understandable, but you should edit your prior comment and put that assertion in strikeout text.
Something I agree strongly on: I think EA is far too casual as a movement about conflicts of interest and mixing romantic and personal relationships. Extremely important executives at influential billion-dollar companies are routinely fired for the kind of relationship that wouldn’t even warrant a mention in EA. We should, as a community, have MUCH stronger guardrails where there is never a question of impropriety in this way.
In the case you cite, they were fired for not disclosing the relationship. My understanding is that the way this normally works is that you tell HR, and then the company figures out how to move people around so that neither is in a position to unfairly affect the other’s work at the company.
(To give a non-central example, when my wife (then-fiancee) and I worked in a kitchen I reported to the head cook even in cases when my wife would normally have been my supervisor.)
Although sometimes mitigating methods aren’t going to be effective. As an extreme example, there is no plan HR could have put in place to green-light encounters between Bill Clinton and a White House intern.
That’s a good point: this doesn’t always work out nicely. Often this means that the more junior person leaves, which disproportionately falls on women.
(My impression is that there’s usually severance and this isn’t considered a negative by others the same way as being fired for an undisclosed inappropriate relationship would be? But it’s still not a good situation.)
Yeah though to my knowledge most heads of EA orgs don’t actually date (right) so this isn’t a problem we actually have. If it’s just inside an org then one or other moving is pretty feasible.
Several people were confused by what I meant here
Senior people at EA orgs certainly date within the community, and I could imagine it being a problem—you can’t transfer away from, say, the head of HR, or the head of operations. But I don’t really know if or how this happens within EA orgs, and think organizations need policies to deal with this.
The problem goes beyond guardrails. Any attempts to reduce these conflicts of interest would have to contend with the extremely insular social scene in Berkeley. Since grantmakers frequently do not interact with many people outside of EA, and everyone in EA might end up applying for a grant from Open Phil, guardrails would significantly disrupt the social lives of grantmakers.
Let’s not forget that you can improperly favor not just romantic partners, but also friends. The idea of stopping Open Phil from making grants to organizations where employees are close friends with (other) grantmakers is almost laughable because of how insular the social scene is—but that’s not at all normal for a grantmaking organization.
Even if Open Phil grantmakers separated themselves from the rest of the community, anyone who ever wanted to potentially become a grantmaker would have to do so as well because the community is so small. What if you become a grantmaker and your friend or romantic partner ends up applying for a grant?
In addition, many grants are socially evaluated at least partially, in my experience. Grantmakers have sometimes asked me what I think of people applying for grants, for example. This process will obviously favor friends of friends.
As such, the only way to fully remove conflicts of interest is likely to entail significant disruptions to the entire EA social scene (the one that involves everyone living/partying/working with the same very small group of people). I think that would be warranted, but that’s another post and I recognize I haven’t justified it fully here.
These dynamics are one reason (certainly not the only one) why I turned down an offer to be a part time grantmaker, choose not to live in Berkeley, and generally avoid dating within EA. Even if I cannot unilaterally remove these problems, I can avoid being part of them.
To be very clear: I am not saying “this can never be changed.” I am saying that it would require changing the EA social scene—that is, to somehow decentralize it. I am not sure how to do that well (rather than doing it poorly, or doing it in name only). But I increasingly believe it is likely to be necessary.
I appreciate you holding that the Bay Area EAs who “[all live/party/work] with the same very small group of people” should, umm, stop doing that. But until they do, do you think it’s worth it having very poor governance of large sums of money?
“very poor governance”
Flagging that this claim needs backing. How poorly governed are the actual dollars (not the relationships) at the end of the day? You decide
This is fair, though I stand behind my words.
I’m genuinely not understanding this. Do you think only Bay area people can manage large amounts of money well? Or that non EA people won’t manage it well? Or something else?
I’m saying the opposite, that the same small group shouldn’t continue managing everything (and specifically, grantmaking) if they are so prone to conflicts of interest with each other.
Thanks I get the point now.
I respect you for writing this comment.
This would be something I’d be uncomfortable writing under my name.
I have not (yet) known myself to ever be negatively affected for speaking my mind in EA. However, I know others who have. Some possible reasons for the difference:
My fundamental ethical beliefs are pretty similar to the most senior people.
On the EA Forum, I make almost extreme effort to make tight claims and avoid overclaiming (though I don’t always succeed). If I have vibes-based criticisms (I have plenty) I tend to keep them to people I trust.
I “know my audience:” I am good at determining how to say things such that they won’t be received poorly. This doesn’t mean “rhetoric,” it means being aware of the most common ways my audience might misinterpret my words or the intent behind them, and making a conscious effort to clearly avoid those misinterpretations.
Related to the above, I tend to “listen before I speak” in new environments. I avoid making sweeping claims before I know my audience and understand their perspective inside and out.
I’m a techy white man working in AI safety and I’m not a leftist, so I’m less likely to be typed by people as an “outsider.” I suspect this is mostly subconscious, except for the leftist part, where I think there are some community members who will consciously think you are harmful to the epistemic environment if they think you’re a leftist and don’t know much else about you. Sometimes this is in a fair way, and sometimes it’s not.
I’m very junior, but in comparison to even more junior people I have more “f*** you social capital” and “f*** you concrete achievements you cannot ignore”.
Same goes for me, despite not satisfying most of your bullet points, and I often comment with contrarian and controversial views, and am a leftist.
But I think different orgs might have very different approaches here. I took part in a residency and in some other activities organised by Czech EAs, and I made it to advanced stages of the hiring process of Rethink Priorities and some other orgs. I hold all of those in high regard, including those who ultimately rejected me, but there are many others who seem fishy in comparison, and who I can see taking my views as expressed on the forum into account.
I’m a “white” male too though.
I certainly didn’t mean to imply that if you don’t have one of those bullet points, you are going to be “blacklisted” or negatively affected as a result of speaking your mind. They just seemed like contributing factors for me, based on my experience. And yeah, I agree different people evaluate differently.
Thanks for sharing your perspective.
So, to clarify, a guess from an unrelated party about why this talk might have resulted in a lack of an invitation pattern-matched to language used by other people in a way that has no (obvious to me) relationship to blacklists...?
I’m not sure what this was intended to demonstrate.
I am curious how you would distinguish a blacklist from the normal functioning of an organization when making hiring decisions. I guess maybe “a list of names with no details as to why you want to avoid hiring them” passed around between organizations would qualify as the first but not the second? I obviously can’t say with surety that no such thing exists elsewhere, but I would be pretty surprised to learn about any major organizations using one.
I should have made this clearer. My claim (also informed by other anecdotes that I should have shared) is that people are put on blacklists for trivial reasons (eg, I don’t like what this person said, they seem too “woke”, they spoke badly about a friend of mine one time) but camouflaged under someone having “weak epistemics” or not being “truth-seeking enough”.
I’m not sure as I haven’t ever made a blacklist or seen other people’s blacklists. A blacklist to me seems something that either has (1) no reason or (2) a very weak reason—maybe that’s camouflaged in something else (perhaps in rationalist language as described in Point #1).
Thank you for a good description of what this feels like. But I have to ask… do you still “want to join that inner circle” after all this? Because this reads like your defense of using a burner account is that it preserves your chance to enter/remain in an inner ring which you believe to be deeply unethical. Which would be bad! Don’t do that! Normally I don’t go around demanding that people must be willing to make personal sacrifices for the greater good if they want to be taken seriously but this is literally a forum for self-declared altruists.
Several times I’ve received lucrative offers and overtures from sources (including one EA fund) that seemed corrupt in ways that resemble how you think your funder is corrupt. Each time my reaction has been “I’ve gotta end this relationship ASAP. This will be used to pressure me into going along with corruption. Better to remove their power over me on my terms.” This was clearly correct in hindsight; it saved me and my team from some entanglements that would have made it harder to pursue our mission, and also it left me free to talk about the bad stuff I saw as much as I want to. While I did pass up a lot of money for myself and my organization, we’re doing fine now. None of this was some crazy-advanced Sun Tzu maneuver; it’s common knowledge that refusing dirty money is the right thing to do but you have to pass up money to do it.
I dunno, a lot of these burner account accusations just strike me as trying to provoke a fight that the poster themselves lacks the courage and conviction to actually participate in, and I have very little patience for “let’s you and him fight”. I assume that the point of posting this stuff is to advocate for some sort of change, but that can’t happen unless specific people lead the charge. And if you’re not willing to bear any costs at all, then why should anyone else pick up your banner? Even if I wanted to, how would I lead the charge against “my friend who I won’t name got the impression that someone else who I won’t name did something bad, based on circumstantial evidence that you can’t check”? Questions of right and wrong aside, this plan just won’t work, you can’t actually lead from the rear like this.
Given your stated beliefs, your moral duty is to either become a “troublemaker” even if the risk to your career is real or else cut yourself off from the dirty money and go do something that’s not compromised. Personally I’ve usually chosen the latter option when I’ve faced similar dilemmas but I have a ton of respect for good-faith troublemakers.
Anonymity is not useful solely for preserving the option to join the critiqued group. It can also help buffer against reprisal from the critiqued group.
See Ben Hoffman on this (a):
“Ayn Rand is the only writer I’ve seen get both these points right jointly:
There’s no benefit to joining the inner ring except discovering that their insinuated benefit does not exist.
Ignoring inner rings is refusing to protect oneself against a dangerous adversary.”
Thanks Sarah, you crystallised a bit of what was floating around in my mind on this topic. This sentence could be considered a bit emotive and persuasive for this forum, but I loved it ;).
”Normally I don’t go around demanding that people must be willing to make personal sacrifices for the greater good if they want to be taken seriously but this is literally a forum for self-declared altruists.”
Just for the record, since I can imagine the comments giving people a vibe that getting retroactive funding is bad: If you run an EA project and for some unexpected reason you go over budget, please do apply for retroactive funding at least from the LTFF. Planning is hard, sometimes things go wrong.
It won’t always make sense to bail you out, but I do actually prefer the world where we fund people enough to cover the 80th percentile of expected cost and then fill you up in the remaining 20% instead of a world where we fund everyone to the 98th percentile of cost and then have people try to give money back to us, or generically overfund a bunch of projects.
Like others commenting, I’m not convinced that the anecdotes here point to blacklists. I will say, if organizations do have blacklists and put people on them for reasons like “they gave a talk I didn’t like”, that’s very bad, and I’m against it.
I do think adjectives like “weak epistemics”, “not truth-seeking”, and “not rational” are often completely contentless and are basically power moves. I basically think there are few contexts when it makes sense to apply these to other people in the community, and if you think there’s something flawed with the way that a person thinks, you should state it more precisely (e.g. “this person seems to make hyperbolic and false claims a lot”, or “they gave a talk and it seemed to be based on vague vibes rather than evidence”, or “they value other things much more highly than utility and I value utility extremely highly, so we don’t agree”).
Sorry, Constellation is not a secret exclusive office (I’ve been invited and I’m incredibly miscellaneous and from New Zealand).
It’s a WeWork and from my understanding it doesn’t accept more applications because it’s full. It’s unlikely Claire gave a grant to Buck since (a) like you said, this is a well-known relationship, and (b) the grant-makers for AI safety are 2 people who are not Claire (Asya Berghal and Luke Muelhausser).
From personal experience, it’s actually really easy to talk about diversity in EA? I literally chatted with someone who is now my friend when I said I believe in critical race theory, and they responded that wokism hurt a lot of their friends, and now we’re friends and talk about EA and rationalism stuff all the time. I find most rationalists are so isolated from the blue tribe nowadays that they treat diversity chat with a lot of morbid curiosity, and they’ll engage if you can justify your beliefs well.
Blacklists as I understand them have really high bars and are usually used for assault or when a person is a danger to the community. I also think not inviting someone to a rationality retreat cause you don’t want to hang out with them is fine. I would rather die than do circling with someone I disliked tbh (I still don’t know what circling is but I just assume every rationality retreat is read the CFAR handbook or circling at this point).
This breaks the weird default of treating everything as good faith, but this post reads to me as someone who actually isn’t familiar with EA mechanisms and is more grasping at straws due to their anxieties about the EA space and specifically the Bay Area. Many of the details just hit weird trip wires for me that make me think something’s up (e.g. most of the arguments are poorly made when there are stronger empirics around them).
Edit: I confused constellation and lightcone. I still maintain it’s just an office and people need to chill out about the status anxiety of it.
Constellation isn’t located in a WeWork. The Lightcone Offices are located in a WeWork. Constellation doesn’t really have an application process IIRC, the Lightcone Offices accepts applications (though we are also likely to shut down in March).
Oh sorry I thought both were definitely weworks? I’ll edit that in.
The second half of Point 4 seems like a jumble of words I can’t figure out the meaning or relevance of. Am I missing something?
It says, if I understand correctly, that you don’t know what circling is, you would rather die than do it with people you dislike, that’s why people shouldn’t be invited to rationality retreats, and you don’t know what rationality retreats involve, but (you assume?) they involve circling.
Yeah so I should have written this clearer. It’s making a few claims:
1. Rationalist retreat type things often require a level of intimacy and trust that means it’s probably ok for them to be more sensitive and have a lower bar for inviting people.
2. Often a lot of young EAs have status anxiety about being invited to things of actual low importance for their impact (e.g. circling). I’m signalling that these are overblown and these social activities are often overestimated in their enjoyment and status.
People who are agreement-downvoting this: if you don’t agree with part of the comment, please write a reply explaining what you disagree with before downvoting. I see this has many downvotes but I can’t tell what part everyone is objecting to.
I think you’re conflating Lightcone Office (in WeWork, does have applications but I think did pause at one point because of space issues, run by LessWrong/Lightcone team, houses mostly lesser known orgs and independents) with Constellation (a few blocks away, run by Redwood Research, houses several major orgs)
On point 2, I would probably argue that philanthropic agencies shouldn’t give grants to an org which their partner is involved with, regardless of whether they are involved in the grant or not. I have been surprised to read a number of posts where this seems to be happening.
This might seem harsh, but there is value in being squeaky clean and leaving yourself above reproach. Other funding agencies can come in if the work is valuable enough.
I don’t think this is standard anywhere for grantors, but I was unsure, so I checked a few: Carnegie, Gates, and the National Council of Nonprofits guidance. All three require disclosure, some cases require recusal, and none of the three bans funding.
Does Open Philanthropy have a public document like this?
I hope it at least exists internally but I think they should follow the example of other well-established organisations like Gates to make it public, especially given the prevalence of polyamory in this community and the “insularity” of the Bay Area as described in this comment.
Edit: I believe this was important enough to turn into a separate post.
There is this, but I agree it would be good if there was one that were substantially more detailed in describing the process.
(You are probably getting downvotes because you brought up polyamory without being very specific about describing exactly how you think it relates to why Open Phil should have a public COI policy. People are sensitive about the topic, because it personally relates to them and is sometimes conflated with things it shouldn’t be conflated with. Regardless, it doesn’t seem relevant to your actual point, which is just that there should be a public document.)
Not sure if it’s public, but this indicates it exists.
This seems pretty hard to put into practice. Let’s say TLA gets most of its funding from OP. TLA is considering hiring someone: should they ask “are you romantically involved with any of these 80 people” as part of their decision to hire, and weigh employing this particular person against the difficulty of making up the funding shortfall? Or after hiring someone should TLA then ask the question, and just accept losing most of their funding if the answer is yes? Should OP be doing the same? (“Are you romantically involved with any of these ~10,000 people at these organizations we have an ongoing relationship with?”)
The particular situation you’re talking about is with a relatively senior person at OP, but I think not incredibly so? They’re one of 27/80 people who either have “senior” in their title or are co-CEO/President. The person at the grantee org looks to be much more senior, probably the #2 or #3 person at the org. A version of your proposal that only included C-level people at the grantee org and people working on the relevant area (or people who directly or indirectly supervise people who do) at the granting org would make a lot more sense, though I’m not sure it’s a good idea.
(I do think you should have COI policies, but recusal at the granting organization is the standard way to do it outside EA, and I think is pretty reasonable for EA as well.)
Thanks Jeff, you've convinced me that a zero-relationship policy wouldn't work. I think I didn't grasp the scale of these orgs and just how unrealistic it would be to avoid romantic entanglement at all levels.
I think something along the lines of your steelmanning of my position here might ensure an extremely low chance of relationship bias affecting grants.
“A version of your proposal that only included C-level people at the grantee org and people working on the relevant area (or people who directly or indirectly supervise people who do) at the granting org would make a lot more sense, though I’m not sure it’s a good idea.”
I know other orgs operate through recusal at the granting org, but romantic bias can still get a foot in the donor's door, and people may well still struggle to vote against someone's partner out of loyalty. Recusal helps, and I'm sure it's happening already, but it doesn't seem good enough in some situations.
Thanks for the engagement.
I appreciate where the sentiment is coming from (and I’d personally be in favour of stronger COI norms than a lot of EA funders seem to have) but the impact cost of this seems too high as stated.
There's value in being squeaky clean, but there's also value in funding impactful projects, and I think having COI policies apply across the whole large org ("If anyone from our org is dating anyone from your org, we can't fund you") will end up costing way more value than it gains.
That's a strong argument, thanks Will. It's an interesting question which has more value: being squeaky clean, or having some projects perhaps be underfunded. I would hope, though, that it wouldn't necessarily cost as much as we might think, if other funders could cover shortfalls. OpenPhil isn't the only donor fish in the sea, although they are perhaps the only Leviathan for some EA-related orgs.
Perhaps this is also part of the argument in favour of having more, slightly smaller funders rather than a few huge ones, to help avoid COIs.
Although I didn't say it, as I was going for the "squeaky clean" argument, you could also potentially draw the line at no funding to orgs where there are relationships between people at some kind of leadership/decision-making level. This wouldn't be squeaky clean, but it would be cleaner at least.
I’ve heard others suggest this, but don’t know what it means. Do you think Dustin should give half his money to someone else? Or that he should fund two independent grantor organizations to duplicate efforts instead of having only one? Or just that we want more EA billionaires?
I feel NickLaing is encoding an implicit graph-theoretic belief that may not be factually accurate. The premise is that COI opportunities fall with decentralization, but it may be the case that more diffuseness actually leads to more problematic intermingling. I don't have super good graph theory intuitions, so I'm not making a claim about whether this is true, just that it's a premise and that its truth value matters.
My graph-theoretic intuition is that it depends a lot on the distribution of opportunities. Because EAs tend to both fund and date other EAs, whether COIs increase or decrease probably depends to some extent on the relative size of the opportunity/recipient network.
My premise may well be wrong, but all I have heard to date is that the conflicts of interest aren't that big a problem, not a clear argument that more diffuseness could make COIs worse.
If we take an imaginary world where there is only one donor org and many donee organisations, within a small community like EA it seems almost impossible to avoid conflicts of interest in a high proportion of grants.
But I have low confidence in this, and would appreciate someone explaining arguments in favour of centralisation reducing the potential for COIs.
I think Nick is suggesting that if we had Open Phil split into funders A and B (which were smaller than Open Phil), then A declining to fund an organization due to a COI concern would be somewhat less problematic because it could go to B instead. I’m not a graph theory person either, but it seems the risk of both A and B being conflicted out is lower.
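To make that intuition concrete, here is a minimal back-of-the-envelope sketch. It is not from the thread and not a model of any real funder: it assumes each staff member independently has a small, made-up probability of a disqualifying relationship with a given grantee, and that a funder is "conflicted out" if any of its staff are.

```python
# Rough illustrative sketch only. The staff counts and the 1% per-person
# COI probability are invented for illustration, not real figures.

def p_conflicted_out(staff: int, p_per_person: float) -> float:
    """Probability that at least one staff member has a disqualifying COI."""
    return 1 - (1 - p_per_person) ** staff

p = 0.01  # hypothetical per-person chance of a COI with a given grantee

one_big_funder = p_conflicted_out(staff=80, p_per_person=p)
one_small_funder = p_conflicted_out(staff=40, p_per_person=p)
both_small_funders = one_small_funder ** 2  # grantee only loses access if both are conflicted

print(f"One 80-person funder conflicted out:   {one_big_funder:.2f}")      # ~0.55
print(f"Both 40-person funders conflicted out: {both_small_funders:.2f}")  # ~0.11
```

Under these made-up assumptions the grantee is far less likely to be locked out when there are two smaller funders, which is the intuition above. The obvious caveat, as Jeff notes, is that a realistic policy would only count a handful of relevant decision-makers rather than all staff, and recusal rather than total disqualification changes the picture further.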
I don't think that's a good reason to split Open Phil, although I do think some conflicts are so strong that Open Phil should forward those organizations to external reviewers for determination. For example, I think a strong conflict disqualifies all the subordinates of the disqualified person as well; e.g. I wouldn't think it appropriate to evaluate the grant proposal of a family member of anyone in my chain of command.
Correct: a treatment of this question that does not consider BATNAs or counterfactuals would be inaccurate.
Thanks David. I think any of those 3 might work. I also didn't know just how much money Dustin was giving until I looked it up just now. Great stuff!
This wasn't clear, but I don't think any of the three make sense as solutions. You can't really tell donors that they don't get to make decisions about what to fund. Having multiple orgs creates duplication and overlap, which is really costly and wasteful, and if the organizations coordinate you haven't really helped. And lastly, sure, I'd love for there to be more donors, but it's not really actionable, other than telling more EAs to make lots of money. (And they probably should. EA was, is, and will be funding constrained.)
I think the first two options make some sense, and I don't think the donor diversity question is simple or has simple answers.
On the first option, of course people can give money where they want, but I think any smart big donor could respond to a good argument for diversification of giving. It’s not about telling anyone to do anything, but about figuring out what is actually the best way to do philanthropy in the EA community in the long term.
I don't think having multiple orgs is necessarily costly and wasteful. Even if donors co-ordinate to some degree, a donor board with no major COI could make more uncompromised and rational decisions, and also avoid controversy both from within and outside the movement.
Charity Entrepreneurship has invested in foundation entrepreneurship, and makes a number of good arguments for why it can be good to have more funding orgs out there, even if they are smaller. These benefits include more exploration of different cause areas and potential access to different pools of funding and different donors.
As a side note (although I know it wasn't intentional), I don't think it's a great conversational technique on a forum to suggest three possible solutions, seemingly in good faith, and then turn around in the next comment and say they don't make sense. This would work in an in-person discussion, I think, but it makes it hard to have a discussion on a forum.
I’ve retracted the original comment as it’s clear to me now that it doesn’t make practical or ethical sense to completely rule out grants to orgs where there is partner entanglement. I still think it’s an important discussion though!
I’m concerned this post describes itself as a response to another post, but doesn’t actually address the arguments made in that discussion.
Instead it reads like a continuation of posts by other burner accounts.
For this reason, I downvoted.
It’s also full of insinuation and implication and “X may mean Y [which is damning]” in a way that’s attempting to get the benefit of “X = Y” without having to actually demonstrate it.
In my opinion, “you have to use a burner account to put forth this kind of ‘thinking’ and ‘reasoning’ and ‘argument’” is actually a point in EA culture’s favor.
I’m not describing this post as a response to the other post.
I initially wrote at the top of this post that,
and I believe that this warranted a separate post. In no way was this meant to be a rebuttal of all the arguments made there. I apologise for the confusion.
At the top of the post, I’ve now clarified that I’m offering a different perspective (to add to the public discourse) and that this made more sense to me as a post than a comment and NOT a rebuttal.
On the Future Forum stuff, it seems worth noting that Sam Altman was literally speaking on the morning of the new venue (I was volunteering, though I didn't help through the night as many did). It feels like it was a reasonably important morning to go well.
I don't quite know what "blacklists" means here, but I imagine that people make judgements based on backchannel information all the time. If that's what the poster means, then yeah, I'd imagine this happens in lots of little ways (60%). I'm not sure that disputing grants is a negative signal, though maybe that's easy for me to say as someone who seems to be disagreeable in an acceptable way.
lol when people use this burner account, it’s usually closer to “this argument could get a bit annoying” than “I feel the need to protect my anonymity for fear of retribution.” please don’t speak for all burners
Naaah that’s what main is for :P
Your patience is admirable :)
I've been trying this out for a while now, since before the posts during the last few days on why there are burner accounts on the forum: I've been posting lots of stuff most effective altruists may not be willing to post. I cover some of the how and why of that here.
https://forum.effectivealtruism.org/posts/KfwFDkfQFQ4kAurwH/evan_gaensbauer-s-shortform?commentId=EjhBquXGiFEppdecY
I’ve been meaning to ask if there is anything anyone else thinks I should do with that, so I’m open to suggestions.
Of your two concerns regarding community gatekeeping, I'm not concerned about funding, because I think good projects still get funded. BUT I'm semi-concerned about "logs" of things or rumors. I don't think there are outright "blacklists", unless we are talking about banning names from EA events due to harassment, smear journalism, etc. As for the "logs" I am concerned about, I'm not concerned about the EA Forum, since actually being rigorous there is totally within everyone's control.
The type of logs or rumors I am concerned about are well-summed in this quote of yours:
“The individual asked the organiser why they weren’t invited when everyone else in the same position was invited. The organiser ignored their questions.”
I find this concerning and went through something similar myself. I think lack of transparency of reasoning or feedback in personal cases is too common, and we should continue to push back against it. It's the fact that a person can't respond to address the criticism that makes it much worse. And one has to wonder if information cascades are happening there. That said, of course there is always the chance that leaving the person out was the right decision (these retreat spots cost money after all, every attendee shapes the culture, and no one is owed one), but a reason should be given since the rest of the team attended.
You claim that
blacklists/gatekeeping, to the extent it occurs, is due to rationality culture, but FWIW I disagree, as I think traditional LW-style rationality is very pro-transparency. I think lack of transparency about why someone is left out of things is usually due to typical human stuff like avoidance, fear, laziness, fatigue, overwhelm, confusion over what words to use, lack of time to answer well, frustration at being the one slated to answer over email for decisions which were made jointly (thereby needing to pass an email around and around to be sure one gets the phrasing right), etc. As the saying goes: "Don't first assume malicious intent when incompetence will do." But this does cause real issues. Again, though, I don't think this relates to EA Forum comments as much as to social rumors and one's professional history.

I guess I'll also say, in general: it is odd and maybe even slightly hypocritical to request more accountability and professionalism for grantmakers and EA leaders but less for commenters by being so bullish on anon posting. If the EA movement and social scene were one gigantic company, only those of us with the most serious, well-researched complaints would be able to say anything anonymously[1] and be taken very seriously; no more than you'd take seriously an anonymous email sent to you, compared to the reports of your coworkers who all use work emails and real names.
So I hope you can see that over-professionalizing the EA world may not leave a cultural space for anonymous accounts to actually have their concerns taken seriously by actual EA leadership, unless that anon is writing something exceptionally well written and researched. IDK, just be careful what you wish for here, and be careful not to prove too much.
except to HR, which in EA is kinda like the Community Health Team, and yes, you can speak with them and have your complaint logged anonymously
It seems worth acknowledging a couple of points that are in tension:
Asking difficult questions probably does affect your ability to get jobs and grants. It's not obvious to me that this is always negative, but it does take a lot more energy to frame negative comments so that they aren't taken the wrong way. Sometimes being anonymous is just easier.
EA as a community could be better at providing ways to gather around useful critiques. The fact that all of these have to be posts seems non-ideal. If people could comment on and agree/disagree-vote on grants all listed on the forum (for instance), that might be better (or worse).
EA as a community is unusually open with its grants and pretty unpolitical in hiring. I would guess that most funders do not publish easily searchable lists of their grants, and I think many communities have huge amounts of backchannels and politics about who gets the good jobs. I think we are doing well on the former and okay on the latter.
We could do better.