I know multiple victims/survivors/whatever who were interviewed by TIME, not only one of the named individuals but some of the anonymous interviewees as well.
The first time I cried because of everything that has happened in EA during the last few months was when I learned, for the fifth or sixth time, that yet another of my closer friends in EA had lost everything because of the FTX collapse.
The second time I cried about it all was today.
After the collapse of FTX, any predictions that the effective altruism movement will die with it are greatly exaggerated. Effective altruism will change in ways that maybe none of us can even predict, but it won’t die.
Countless haters of any given movement will themselves, on the internet, into believing that what happens to that movement when it falters is whatever they wish would happen, i.e., that the movement will die. Sensationalist polemicists and internet trolls don’t understand history or the world well enough to know what they’re talking about when they gleefully celebrate the end of whatever cultural forces they hate.
This isn’t just true for effective altruism. It’s true for every movement of which anyone takes such a shallow view. If movements like socialism, communism, and fascism can make a worldwide comeback in the 2010s and 2020s in spite of their histories, effective altruism isn’t going to just up and die, not by a long shot.
Small movements (like species with few members, I think[1]) die more quickly, as do younger movements.
Also, EA seems to appeal to a quite specific type of person & to depend more strongly on current intellectual strands (it did not develop separately in China, the Anglosphere & continental Europe), which seems narrower than socialism/communism/reactionary thought.
I think it’s good to worry about EA disappearing or failing in other ways (becoming a cargo-cult shell of its original form, mixing up instrumental and terminal goals, stagnating & disappearing like general semantics &c).
I’ve tried to find a paper investigating this question, but haven’t been successful—anyone got a link?
Events of the last few months have shown that, over the last few years, many whistleblowers weren’t taken seriously enough. If they had been, a lot of the problems in EA that have since come to pass might have been avoided or prevented entirely. At the least, they could have been resolved much sooner, before the damage became so great.
While more effective altruists have come to recognize this in the last year, one case that I think deserves to be revisited, but hasn’t been, is this review of problems in EA and related research communities, originally written by Simon Knutsson in 2019 based on his own experiences working in the field.
https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/
I’d be curious about making this more concrete, if possible. I don’t think my current model is that “whistleblowers weren’t taken seriously enough” is the reason a bunch of bad stuff happened here, but there’s something that rhymes with that which I maybe do agree with.
Why are you posting these as shortforms instead of as top-level posts?
I wrote my other reply yesterday from my smartphone, and it was hard to tell which of my shortform posts you were replying to. I thought it was a different one, which is why my comment from yesterday may not have seemed very relevant. I’m sorry for any confusion.
Anyway, the reason I’m posting shortforms like this is that they’re thoughts on my mind that I want at least some effective altruists to notice, though I’m not prepared right now to contend with the feedback and potential controversy that making them top-level posts would provoke.
This one is long enough to be a top-level post, though around the days these thoughts were on my mind I didn’t have time to flesh it out with links or more details, or to address what I’m sure would be a lot of good questions I’d receive. I wouldn’t want to post it before it could be of better quality.
I’ve started using my shortform to draft stubs or snippets of top-level posts. I’d appreciate any comments or feedback encouraging me to turn them into top-level posts, or, alternatively, feedback discouraging me from doing so if someone doesn’t think that would be worthwhile.
This is a section of an EAF post I’ve begun drafting about the community and culture of EA in the Bay Area and its impact on the rest of EA worldwide. That post isn’t intended to be only about longtermism as it relates to EA, an overlapping philosophy/movement often attributed originally to the Bay Area. I still feel my viewpoint here, even in its rough form, is worth sharing as a quick take.
@JWS 🔸 self-describes as “anti-Bay Area EA.” I get where anyone is coming from with that, though the issue is that, pro- or anti-, this particular subculture in EA isn’t limited to the Bay Area. It’s bigger than that, and pointing to the Bay Area as the source of either greatness or setbacks in EA strikes me as a wrongheaded sort of provincialism. To clarify, “Bay Area EA” culture entails the stereotypes, both accurate and misguided, of the rationality community and longtermism, as well as the trappings of startup culture and other overlapping subcultures in Silicon Valley.
Prior even to the advent of EA, a sort of ‘proto-longtermism’ was collaboratively conceived on online forums like LessWrong in the 2000s. Back then, as now, a plurality of the userbase of those forums might have lived in California. Yet it wasn’t only rationalists in the Bay Area who took up the mantle of consecrating those futurist memeplexes into what longtermism is today. It was academic research institutes and think tanks in England. It wasn’t @EliezerYudkowsky, nor anyone else at the Machine Intelligence Research Institute or the Center for Applied Rationality, who coined the phrase ‘longtermism’ and wrote entire books about it. That was @Toby_Ord and @William_MacAskill. It wasn’t anyone in the Bay Area who spent a decade trying to politically and academically legitimize longtermism as a prestigious intellectual movement in Europe. That was the Future of Humanity Institute (FHI), spearheaded by the likes of Nick Bostrom and @Anders Sandberg, and the Global Priorities Institute (GPI).
In short, if it’s going to be made about culture like that, EA is an Anglo-American movement and philosophy (notwithstanding other features introduced from Germany via Schopenhauer). It takes two to tango. This is why I think calling oneself “pro-” or “anti-” Bay Area EA is pointless.
Maybe it’s worth pointing out that Bostrom, Sandberg, and Yudkowsky were all in the same extropian listserv together (the one from the infamous racist email), and have been collaborating with each other for decades. So maybe it’s not precisely a geographic distinction, but there is a very tiny cultural one.
I’m an anti-Bay Area EA because from the stuff I’ve read about sexual harassment in EA it’s like 90% Bay Area EAs doing it, and seems to be enabled by specific subcultural factors there that I don’t see anywhere else in the movement.
Can you say more about this?
I’m guessing this is going to be a controversial post, though I was satisfied when, like 10 minutes ago, it had net zero karma, because I wanted to make a screen cap for a Thanos “perfectly balanced, as all things should be” meme. This isn’t to say that whoever sees this post and feels like voting on it should try upvoting or downvoting it to get it to exactly zero karma. That would probably be futile, because someone will in short order upvote or downvote it to some non-zero value. I’m just making this extra comment to vent about how frustrating it is that I’ve waited over a year for one of the takes I drop on the EA Forum to have exactly zero karma so I can make a hella dope Thanos EA meme.
Any formal conflict of interest I ever had in effective altruism, I shed myself of almost five years ago. I’ve been a local and online group organizer in EA for a decade, so I’ve got lots of personal friends who work at, or with support from, EA-affiliated organizations. Those might be called more informal conflicts of interest, though I don’t know how much they count as conflicts of interest at all.
I also haven’t had any greater social conflicts of interest, like being in a romantic relationship with anyone else in EA, for just as long.
I’ve never signed a non-disclosure agreement for any EA-affiliated organization I’ve had a role at or contracted with for any period of time. Most of what I’m referring to here is nothing that should worry anyone who is aware of the specific details of my personal history in effective altruism. My having dated someone for a few months who wasn’t a public figure or a staffer at any EA-affiliated organization, or my having been a board member in name only for a few months to help get off the ground a budding EA organization that has now been defunct for years anyway, is of almost no relevance or significance to anything happening in EA in 2023.
In 2018, I was a recipient of an Effective Altruism Grant, one of the kinds of alternative funding programs administered by the Centre for Effective Altruism (CEA), like the current Effective Altruism Funds or the Community Building Grants program, though the EA Grants program was discontinued a few years ago.
I was also contracted for a couple of months in 2018 with the organization then known as the Effective Altruism Foundation, as a part-time researcher for one of its projects, the Foundational Research Institute (FRI), which has for a few years now been succeeded by a newer effort launched by many of the same effective altruists who operated FRI, the Center on Long-Term Risk (CLR).
Most of what I intend to post about on this forum in the coming months won’t be about CLR as it exists today or its background, though there will be some of that. Much of what I intend to write will technically entail referencing some of the CEA’s various activities, past and present, though that’s almost impossible to avoid when trying to address the dynamics of the effective altruism community as a whole. Most of what I intend to write that touches on the CEA will have nothing to do with my past conflict of interest of having been a grant recipient in 2018.
Much of the above is technically me doing due diligence, though that’s not my reason for writing this post.
I’m writing this post because everyone else should understand that I indeed have zero conflicts of interest, that I’ve never signed a non-disclosure agreement, and that for years and still into the present, I’ve had no active desire to work up to netting a job or career within most facets of EA. (Note, Jan. 17: Some of that could change but I don’t expect any of it to change for at least the next year.)
People complained that the Centre for Effective Altruism (CEA) had said they were trying not to be like the “government of effective altruism,” but then kept acting exactly as if they were the government of EA for years and years.
Yet that’s wrong. The CEA was more like the police force of effective altruism. The de facto government of effective altruism was, for the longest time, maybe from 2014 to 2020, Good Ventures/Open Philanthropy. All of that changed with the rise of FTX. All of that changed again with the fall of FTX.
I’ve put everything above in the past tense because that was the state of things before 2022. There’s no such thing as a “government of effective altruism” anymore, whether anyone wants one or not. Neither the CEA, nor Open Philanthropy, nor Good Ventures could fill that role now.
We can’t go back. We can only go forward. There is no backup plan anyone in effective altruism had waiting in the wings to roll out in case of a movement-wide leadership crisis. It’s just us. It’s just you. It’s just me. It’s just left to everyone who is still sticking around in this movement together. We only have each other.
I just posted on the Facebook wall of another effective altruist:
We would all greatly benefit from expressing our gratitude like this to each other more often.
I can’t overstate how much the UX and UI of the EA Forum on mobile sucks. It sucks so much. I know the Online Team at the CEA is endlessly busy and I don’t blame anyone for this, but the UX/UI on mobile for the EA Forum is abysmal.
Update: it got better.
It should be noted that for most of the period from 2016 to 2020 that the Centre for Effective Altruism itself acknowledges, on the Mistakes page of its website, as its longest continuous pattern of mistakes, the only three members of its board of directors were Nick Beckstead, Toby Ord, and William MacAskill.
(Note, January 15th: As I’m initially writing this, I want to be clear and correct enough about this that I’ll be running it by someone from the CEA. If someone from the CEA reads this before I contact any of you, please feel free to reply here or send me a private message about any mistakes/errors I’ve made here.)
(Note, Jan. 16th: I previously stated that Holden Karnofsky, rather than Toby, was a board member. I also stated that this was the board of the CEA in the UK; that was my mistake. I’ve now been corrected by a staffer at the CEA, whom I mentioned before I’d be in contact with. I apologize for my previous errors.)
I’ll probably make a link post with a proper summary later but here is a follow-up from Simon Knutsson on recent events related to longtermism and the EA school of thought out of Oxford.
https://www.simonknutsson.com/on-the-results-of-oxford-style-effective-altruism-existential-risk-and-longtermism/
The FTX bankruptcy broke something in the heart of effective altruism, but in the process, I’m astonished by how dank this community has become. It was never supposed to be this dank and has never been danker. I never would’ve expected this. It’s absurd.
I thought more this morning about my shortform post from yesterday (https://forum.effectivealtruism.org/posts/KfwFDkfQFQ4kAurwH/evan_gaensbauer-s-shortform?commentId=SjzKMiw5wBe7bGKyT) and I’ve changed my mind about much of it. I expected my post to be downvoted because most people would perceive it as a stupid and irrelevant take. Here are some reasons I now disagree with it, though I couldn’t guess whether anyone downvoted the post because they took my take seriously but still thought it sucked.
I’ve concluded that Dustin Moskovitz shouldn’t go full Dark Brandon after all. It wouldn’t just be suboptimal. It’d be too risky and could backfire. I don’t know at exactly what point it would happen, though at some point there would be diminishing marginal returns to Dustin adopting more of a Dark Brandon-esque personal style. In hindsight, I should’ve applied the classic tool of so much effective altruism, thinking on the margin, to the question: what is the optimal amount of Dark Brandon for Dustin Moskovitz to embrace?
Dustin leaning in a more Dark Brandon-esque direction couldn’t totally solve the problems EA faces; there are some kinds of problems it couldn’t solve at all. It could, though, ameliorate the severity of some problems, in particular some of the image problems EA has.
For those who don’t know at all what I’m getting at, I’m thinking about how Dustin Moskovitz might tweak his public image or personal brand to improve on its already decent standing. Dustin is not the subject of as many conspiracy theories as many other billionaires and philanthropists, especially for one who had his start in Silicon Valley. He’s not the butt of as many jokes as Mark Zuckerberg or Jeff Bezos about being a robot or an alien. If you asked a socialist, or someone who just hates billionaires for whatever reason, to list the ten billionaires they hate the most, Dustin Moskovitz is one name that would almost certainly not make it onto the list.
The downside risk of Dustin becoming a more controversial or bold personality gets at the value he provides to EA by being the opposite. That he has been a quieter philanthropist means he isn’t seen nearly as much as the poster boy for EA as a movement. Hypothetically, for the sake of argument, if Asana went bankrupt for some reason, that would not be nearly as bad for EA as the collapse of FTX was. Dustin not feuding with people the way Elon Musk has means he doesn’t have nearly as many enemies. That means the EA community overall has far fewer enemies. It’s less hated. It’s not as polarized or politicized. These are all very good things. Much of that is thanks to Dustin being more normal and less eccentric, less volatile and more predictable, and more of a private person than a blowhard.
As of June 2022, Holden Karnofsky said he was “currently on 4 boards in addition to Open Philanthropy’s.”
https://www.lesswrong.com/posts/nSjavaKcBrtNktzGa/nonprofit-boards-are-weird
If that’s still the case, that’s too many organizations for a single individual in effective altruism to hold board positions at.
For years now, much ink has been spilled about the promise and peril, for effective altruism, of dank memes. Many have singled me out as the person best suited to speak to this controversy. I’ve heard, listened, and taken all such sentiments to heart. This is the year I’ve opted to finally complete a full-spectrum analysis of the role of dank memes as a primary form of outreach and community-building.
This won’t be a set of shitposts on other social media websites. This will be a sober evaluation of dank EA memes, composed of at least one post, if not a series of multiple posts, on the EA Forum. They may be so serious that few, if any, memes will be featured at all. It is time for the dank EA memes to come home.
The age of dankness in effective altruism is upon us. It’s unstoppable. I don’t mean this just as the go-to administrator of the Dank EA Memes Facebook group, or as one of the more reputed online memelords/shitposters in the EA movement.
I don’t just mean that as someone other effective altruists turn to for answers in the face of the onslaught of dank memes becoming a primary vector/medium shaping public perception of EA, both within and outside of the effective altruism community itself, for better or worse. I emphasize this as the go-to person other effective altruists turn to as potentially capable of using my influence to rein in the excessive or adverse impact of dank memes on EA. I don’t have that capability.
I can’t rein in dank memes in EA or do whatever else anyone might want me to do about them. They’re beyond me or any other mods/admins of the Dank EA Memes Facebook group.
The trend is unstoppable. It’s inevitable. The future of EA is dank.
To emphasize just how final this reality is, here is a very incomplete list of leaders in EA who, as far as I’m aware, have embraced it:
Dustin Moskovitz
Holden Karnofsky
Eliezer Yudkowsky
Ben West
Robert Wiblin
Kudos to Rob and Eliezer as early adopters of this trend.
While I think it’s technically unlikely, I still feel like there is a significant chance most of what’s wrong in effective altruism can be made right if Dustin Moskovitz goes full Dark Brandon.
Can you be a bit more precise about what you mean? Even though I’m well aware of the Dark Brandon meme, I still don’t know for sure what you’re referring to.
tl;dr I thought it could potentially be a good idea for Dustin Moskovitz to take his use of dank memes further to make himself and EA more relatable, by becoming a notorious edgelord like Elon Musk, or like Biden in the form of Dark Brandon. I’ve since concluded it’s unnecessary, especially given downside risks like how doing so might inadvertently create a toxic cult of personality around an unwitting Dustin.
Sometimes I use shortform posts as a notepad for unrefined thoughts that I might refine later. I posted the last couple on mobile which maybe has a default tag of ‘frontpage’ that I didn’t notice. Those shortform posts wound up on the frontpage by mistake, so it’s fair that they were downvoted.
Anyway, what I mean is:
All kinds of people will hate any given billionaire for umpteen reasons. Most billionaires don’t do much to mitigate that, but Elon Musk does, by being relatable. A lot of that comes from turning his personal brand into one big, dank meme.
Elon Musk did so in the vein of Donald Trump, by roughly channeling another pop culture aesthetic you may also be familiar with, Big Dick Energy.
Both Trump and Sanders boosted their popularity by becoming more relatable and leaning into memes as part of their personal brands, in contrast with how unrelatable Hillary Clinton was, her best attempt at a meme being the still-cringe “Pokemon Go. To. The. Polls.” Biden, meanwhile, has successfully maintained some relatability through the Dark Brandon meme.
Dustin Moskovitz has become more relatable to the EA community at large through his increased willingness, in the last year or two, to engage with us directly, and through dank memes. I think this is genuine and has had a positive impact.
EA is sometimes stigmatized as out of touch, elitist, and unrelatable. This is in spite of the fact that it focuses on activities that, say, the median American may respect: trying to end factory farming and global poverty without overly guilt-tripping average individuals about it; supporting more political stability in a not-so-partisan way; or reining in out-of-control industries that threaten civilizational destruction.
Even though Biden has wanted to stabilize the economy, defend democracy, and tackle climate change, he’s still demonized as a woke corporate communist dictator. Dark Brandon succeeds as a meme by ironically mocking the idea that Biden is a force of evil when he comes across as the biggest old fogey ever. Effective altruists are sometimes similarly demonized.
My tentative question was whether, if Dustin Moskovitz were to channel the Dark Brandon aesthetic the way Elon Musk has been able to channel Trump’s meme aesthetic, negative stereotypes about EA could be turned on their head to humanize the community and make it more relatable.
I’ve concluded the answer is no, for multiple reasons, such as EA’s brand not being as badly damaged as feared in the wake of the FTX collapse, and there not being a need for a big billionaire personality to fill the shoes left behind by SBF, especially because Dustin and the rest of us want to avoid at all costs the risk of him falling ass-backwards into a personality cult around himself.