I'm guessing this is going to be a controversial post, though I was satisfied when, like 10 minutes ago, it had net zero karma, because I wanted to make a screencap for a Thanos "perfectly balanced, as all things should be" meme. This isn't to say that whoever sees this post and feels like voting on it should try upvoting or downvoting it to get it to exactly zero karma. That would probably be futile, because someone will in short order upvote or downvote it to some non-zero value. I'm just making this extra comment to vent about how frustrating it is that I've waited over a year for one of the takes I drop on the EA Forum to have exactly zero karma so I can make a hella dope Thanos EA meme.
Evan_Gaensbauer
Human Biodiversity (Part 4: Astral Codex Ten)
This is a section of an EAF post I've begun drafting about the community and culture of EA in the Bay Area, and its impact on the rest of EA worldwide. That post isn't intended to be only about longtermism as it relates to EA as an overlapping philosophy/movement often attributed to the Bay Area. I've still felt my viewpoint here, even in its rough form, is worth sharing as a quick take post.
@JWS 🔸 self-describes as "anti-Bay Area EA." I get where anyone is coming from with that, though the issue is that, pro- or anti-, this particular subculture in EA isn't limited to the Bay Area. It's bigger than that, and pointing to the Bay Area as a source of greatness or setbacks in EA is, to me, a wrongheaded sort of provincialism. To clarify, "Bay Area EA" culture specifically entails the stereotypes, both accurate and misguided, of the rationality community and longtermism, as well as the trappings of startup culture and other overlapping subcultures in Silicon Valley.

Even prior to the advent of EA, a sort of 'proto-longtermism' was collaboratively conceived on online forums like LessWrong in the 2000s. Back then, like now, a plurality of the userbase of those forums might have lived in California. Yet it wasn't only rationalists in the Bay Area who took up the mantle to consecrate those futurist memeplexes into what longtermism is today. It was academic research institutes and think tanks in England. It wasn't @EliezerYudkowsky, nor anyone else at the Machine Intelligence Research Institute or the Center for Applied Rationality, who coined the phrase 'longtermism' and wrote entire books about it. That was @Toby_Ord and @William_MacAskill. It wasn't anyone in the Bay Area who spent a decade trying to politically and academically legitimize longtermism as a prestigious intellectual movement in Europe. That was the Future of Humanity Institute (FHI), as spearheaded by the likes of Nick Bostrom and @Anders Sandberg, and the Global Priorities Institute (GPI).
In short, EA is an Anglo-American movement and philosophy, if it's going to be made about culture like that (notwithstanding other features introduced from Germany via Schopenhauer). It takes two to tango. This is why I think calling oneself "pro-" or "anti-" Bay Area EA is pointless.
I'm working on some such resources myself. Here's a link to the first one: a complete-to-date list of posts in the still-ongoing series on the blog Reflective Altruism.
Strongly upvoted
To everyone on the team making this happen:
This seems like it could potentially one day become the greatest thing to which Open Philanthropy, Good Ventures and—by extension—EA ever contribute. Thank you!
To others in EA who may understandably be inquisitive about such a bold claim:
Before anyone asks, “What if EA is one day responsible for ending factory farming or unambiguously reducing existential risk to some historic degree? Wouldn’t that be even greater?”
Yes, those or some of the other highest ambitions among effective altruists might be greater. Yet there’s so much less reason to be confident EA can be that fulcrum for ending those worst of problems. Ending so much lead exposure in every country on Earth could be the most straightforward grand slam ever.
When I mention it could be the greatest, though, that's not just a comparison between focus areas in EA. That question is so meta and complicated that which focus area has the greatest potential to do good has generally never been resolved. It's sufficient to clarify that this endeavour could be the greatest outcome ever accomplished within the single EA focus area of global health and development. It could exceed the value of all the money that has ever flowed through EA to any charity GiveWell has ever recommended.
I’ll also clarify I don’t mean “could” with that more specific claim in some euphemistic sense, of making some confident but vague claim to avoid accountability in making a forecast. I just mean “could” in the sense that it’s a premise worth considering. The fact there’s even a remote chance this could exceed everything achieved with EA to treat neglected tropical diseases is remarkable enough.
Indeed, something is lost even when AI makes dank memes.
I agree that it’s not reasonable for someone to work out e.g. exactly where the funding comes from, but I do think it’s reasonable for them to think in enough detail about what they are proposing to realise that a) it will need funding, b) possibly quite a lot of funding, c) this trades off against other uses of the money, so d) what does that mean for whether this is a good idea. Whereas if “EA” is going to do it, then we don’t need to worry about any of those things. I’m sure someone can just do it, right?
I am at least one someone who not only can, but has already decided I will, at least begin doing it. To that end, for myself or perhaps even others, there are already some individuals I have in mind to begin contacting who may be willing to provide at least a modicum of funding, or would know others who might be willing to do so. In fact, I have already begun that process.
There wouldn’t be a tradeoff with other uses of at least some of that money, given I’m confident at least some of those individuals would not donate or otherwise use that money to support, e.g., some organization affiliated with, or charity largely supported by, the EA community. (That would be due to some of the individual funders in question not being effective altruists.) While I agree it may not be a good idea for EA as a whole to go about this in some quasi-official way, I’ve concluded there aren’t any particularly strong arguments made yet against the sort of “someone” you had in mind doing so.
While recognizing the benefits of the anti-"EA should" taboo, I also think it has some substantial downsides and should be invoked only after consideration of the specific circumstances at hand.
One downside is that the taboo can impose significant additional burdens on a would-be poster, discouraging them from posting in the first place. If it takes significant time investment to write “X should be done,” it is far from certain others will agree, and then additional significant time to figure out/write “and it should be done by Y,” then the taboo would require someone who wants to write the former to invest in writing the latter before knowing if the former will get any traction. Being okay with the would-be poster deferring certain subquestions (like “who”) means that effort can be saved if there’s not enough traction on the basic merits.
As I've already mentioned in other comments, I have myself decided to begin pursuing a greater degree of inquiry, with haste. I've publicly notified others that pushback offered solely to reinforce or enforce such a taboo is likely only to motivate me to do so with more gusto.
>knowledge, or resources relevant to part of a complex question
I have some knowledge and access to resources that would be relevant to solving at least a minor but still significant part of that complex question. I refer to the details in question in my comment that I linked to above.
This isn’t a do-ocracy project. Doing it properly is not going to be cheap (e.g., hiring an investigative firm), and so ability to get funded for this is a prerequisite. Expecting a Forum commenter to know who could plausibly get funding is a bit much. To the extent that that is a reasonable expectation, we would also expect the reader to know that—so it is a minor defect. To the extent that who could get funded is a null set, then bemoaning a perceived lack of willingness to invest in a perceived important issue in ecosystem health is a valid post.
To the extent I can begin laying the groundwork for a more thorough investigation, one going beyond the capacity of myself and prospective collaborators, such an investigation will now at least start snowballing as a do-ocracy project. I know multiple people who could plausibly begin funding this, who themselves in turn may know several other people who'd be willing to do it. Some of the funders in question may be willing to uniquely fund myself, or a team I could (co-)lead, to begin doing the investigation in at least a semi-formal manner.
That would be some quieter critics in the background of EA, or others who are no longer effective altruists but have long wanted an investigation like the one that has now begun to proceed. Why they might trust me in particular is my reputation in the EA community for years now as one effective altruist who is more irreverent towards the pecking orders or hierarchies, both formal and informal, of any organized network or section of the EA movement. At any rate, at least to some extent, a lack of willingness from within EA to fund the first steps of an inquiry is no longer a relevant concern. I don't recall if we've interacted much before, though as you may soon learn, I am someone in the orbit of effective altruism who sometimes has an uncanny knack for meeting unusual or unreasonable expectations.
Many good investigations do not have a specific list of people/entities who are the target of investigatory concern at the outset. They have a list of questions, and a good sense of the starting points for inquiry (and figuring out where other useful information lies).
Having begun thinking several months ago about what I can contribute to such a nascent investigation, I already have in mind a list of several people, as well as some questions, starting points for inquiry, and an approach for further identifying potentially useful information. I intend to begin drafting a document to organize the process I have in mind, and I may be willing to privately share it in confidence with some individuals. You would be included, if you were interested.
I have already personally decided to begin pursuing, myself, inquiries and research that would constitute at least some aspects of the sort of investigation in question. Much of what I generally have in mind, and in particular what I'd be most capable of doing myself, would be unrelated to EVF UK. If it'd make things easier, I'm amenable to avoiding probing in ways that intersect with EVF UK until the CC inquiry has ended. (This probably wouldn't include EVF USA.) That EVF is in the process of disbanding (which would complicate any part of such an investigation), and the fact that another major EA organization is likely in the process of launching an earning-to-give incubator/training organization, are two reasons I will be expediting this project.
The question of a community-wide vote, on any level, about whether there should be such an investigation might at this point be moot. I have personally offered to begin conducting significant parts of such an investigation myself. Since I made that initial comment, I've now read several more providing arguments against the need or desirability for such an investigation. Having found them unconvincing, I now intend to privately contact at least several private individuals, both in and around the EA movement, as well as some outside of or who no longer participate in the EA community, to pursue that end. Something like a community-wide vote, or some proxy like even dozens of effective altruists trying to talk me out of that, would be unlikely to convince me not to do so.
legal counsels were generally strongly advising people against talking about FTX stuff in general
Will MacAskill waited until April to speak fully and openly, on the extra-cautious advice of legal counsel. If that period has ended, to the point that Will could speak to the matter of the FTX collapse, and the before and after, as he had long wanted to, surely almost everyone else could do the same now. The barrier or objection of not talking on the strong advice of legal counsel seems like it'd be null for most people at this point.
Edit: in the 2 hours since I first made this comment, I’ve read most of the comments with arguments both for and against why someone should begin pursuing at least some parts of what could constitute an overall investigation as has been suggested. Finding the arguments for doing so far better than the arguments against, I have now decided to personally begin pursuing the below project. Anyone interested in helping or supporting me in that vein, please reply to this comment, or contact me privately. Any number of messages I receive along the lines of “I think this is a bad idea, I disagree with what you intend to do, I think this will be net negative, please don’t do this”, etc., absent other arguments, are very unlikely to deter me. On the contrary, if anything, such substanceless objections may motivate me to pursue this end with more vigour.
I'm not extremely confident I could complete an investigation of the EA community's whole role in this regard, at the highest level, all by myself, though I am now offering to investigate or research parts of it myself. Here's some of what I could bring to the table.

I'd be willing to do some relatively thorough investigation from a starting point of being relatively high-context. For those who wouldn't think I'm someone who knows a lot of context here, this short form post I made a while ago could serve as proof of concept that I have more context than you might expect. I could offer more information, or answer more questions others have, in an attempt to genuinely demonstrate how much context I have.
I have far fewer time constraints than perhaps most individuals in the EA community who might be willing or able to contribute to some aspect of such an investigation. Already, on my own time, I occasionally investigate issues in and around EA by myself. I intend to do so more in the future. I'd be willing to research more specific issues on my own time if others were to provide some direction. Some of what I might pursue further may be related to FTX anyway, without urging from others.
I’d be willing to volunteer a significant amount of time doing so, as I’m not currently working full-time and may not be working full-time in the foreseeable future. If the endeavour required a certain amount of work or progress achieved within a certain time frame, I may need to be hired in some capacity to complete some of the research or investigating. I’d be willing to accept such an opportunity as well.
Having virtually no conflicts of interest, there's almost nothing anyone powerful in or around EA could hold over me to try to stop me from investigating.
I’m champing at the bit to make this happen probably about as much as anyone.
I would personally find the contents of any aspect of such an investigation to be extremely interesting and motivating.
I wouldn't fear any retaliation whatsoever. Some attempts or threats to retaliate against me could indeed be advantageous for me, as I am confident they would fail to achieve their desired goals, and thus serve as evidence to others that any further such attempts would be futile wastes of effort.
I am personally in semi-regular contact or have decent rapport with some whistleblowers or individuals who retain private information about events related to the whole saga of FTX dating back to 2018. They, or their other peers who’ve also exited the EA community in the last several years, may not be willing to talk freely with most individuals in EA who might participate in such an investigation. I am very confident at least some of them would be willing to talk to me.
I’m probably less nervous personally, i.e., being willing to be radically transparent and honest, about speaking up or out about anything EA-related than most people who have continuously participated in the EA community for over a decade. I suspect that includes even you and Oliver Habryka, who have already been noted in other comments here as among those in that cohort who are the least nervous. Notably that may at this point be a set of no more than a few hundred people.
Producing common-knowledge documents to help as large a subset of the EA community as possible, if not the whole community, learn what happened and what could be done differently in the future would be the goal of any such investigation I'd be most motivated to accomplish. I'd be much more willing to share such a document widely than most other people who might be willing or able to produce one.
Someone asked if it was me who posted the above comment because I’m also the one who made the Facebook post in question. They said I should clarify in case someone else presumed it might be me. I’ll make this clear: I don’t astroturf. The real ones out there like me will post whatever on the EA Forum and take any downvotes like a man. Let them come as they may.
This can especially be the case in a crucial way when a hyper-focus on race by itself can derail attention needed on the highest-stakes issues in human genetic engineering that would impact people of all races. This could be the first decade since these debates started, around the 1970s, in which they'll no longer just be thought experiments debated by bioethicists.
At least at the time, Holly Elmore seemed to consider it at least somewhat compelling. I mentioned this was an argument I provided framed in the context of movements like PauseAI—a more politicized, and less politically averse coalition movement, that includes at least one arm of AI safety as one of its constituent communities/movements, distinct from EA.
>They don’t have short timelines like me, and therefore chuck it out completely
Among the most involved participants in PauseAI, there may presumably be estimates of short timelines at rates comparable to such estimates among effective altruists.

>Are struggling to imagine a hostile public response to 15% unemployment rates
Those in PauseAI and similar movements don't.

>Copium
While I sympathize with and appreciate why there would be high rates of huffing copium among effective altruists (and adjacent communities, such as rationalists), others who have been picking up the slack effective altruists have dropped in the last couple of years are reacting differently. At least in terms of safeguarding humanity from both the near-term and long-term vicissitudes of advancing AI, humanity has deserved better than EA has been able to deliver. Many have given up hope that EA will ever rebound to the point it'll be able to live up to the promise of at least trying to safeguard humanity. That includes both many former effective altruists and some who still are effective altruists. I consider there to still be that kind of 'hope' on a technical level, though on a gut level I don't have faith in EA. I definitely don't blame those who have any faith left in EA, let alone those who see hope in it.
Much of the difference here is the mindset towards 'people', and how they're modeled, between those still firmly planted in EA but somehow with a fatalistic mindset, and those who still care about AI safety but have decided to move on from EA. (I might be somewhere in between, though my perspective as a single individual among general trends is barely relevant.) The last couple of years have proven that effective altruists direly underestimated the public, and the latter group of people didn't. While many here on the EA Forum may not agree that much, or even most, of what movements like PauseAI are doing is as effective as it could or should be, those movements at least haven't succumbed to a plague of doomerism beyond what can seemingly even be justified.
To quote former effective altruist Kerry Vaughan, in a message addressed to those who still are effective altruists: “now is not the time for moral cowardice.” There are some effective altruists who heeded that sort of call when it was being made. There are others who weren’t effective altruists who heeded it too, when they saw most effective altruists had lost the will to even try picking up the ball again after they dropped it a couple times. New alliances between emotionally determined effective altruists and rationalists, and thousands of other people the EA community always underestimated, might from now on be carrying the team that is the global project of AI risk reduction—from narrow/near-term AI, to AGI/ASI.
EA can still change, though either it has to go beyond self-reflection and just change already, or get used to no longer being team captain of AI Safety.
Leverage was an EA-aligned organization, and also part of the rationality community (or at least 'rationalist-adjacent'), about a decade ago or more. Leverage's affiliation with the mantles of either EA or the rationality community was always contentious. From the EA side, largely the CEA, and from the rationality community's side, largely CFAR, there were efforts to shove Leverage out of both within the space of a couple of years. Both CEA and CFAR could scarcely have said or done more, then or now, to disown and disavow Leverage's practices from the time Leverage existed under the umbrella of either network/ecosystem/whatever. They have. To be clear, so has Leverage in its own way.
At the time of the events as presented by Zoe Curzi in those posts, Leverage was basically shoved out the door of both the rationality and EA communities with, to put it bluntly, the door hitting Leverage on the ass on the way out, and the door back in firmly locked behind them from the inside. In time, Leverage came to take that in stride, as the break-up between Leverage and the rest of the institutional polycule that is EA/rationality was extremely mutual.
In short, the course of events, and the practices at Leverage that led to them, as presented by Zoe Curzi and others a few years ago, from that time circa 2018 to 2022, can scarcely be attributed to either the rationality or EA communities. That's a consensus EA, Leverage, and the rationality community agree on, one of the few things they still agree on at all.
I wouldn't and didn't describe that section of the transcript, as a whole, as essentially true. I said much of it is. As the CEO might've learned from Tucker Carlson, who in turn learned from FOX News, we should seek to be 'fair and balanced.'
As to the debugging part, that's an exaggeration that must have come out the other side of a game of broken telephone on the internet. On the other end of that telephone line would've been some criticisms or callouts I read years ago of some activities happening in or around CFAR. I don't recollect them in super-duper precise detail right now, nor do I have the time today to spend an hour or more digging them up on the internet.
As for the perhaps wrongheaded practices introduced into CFAR workshops for a period of time, other than the ones from Leverage Research, I believe they were some introduced by Valentine (e.g., 'againstness,' etc.). As far as I'm aware, at least as it was applied at one time, some past iterations of Connection Theory bore at least a superficial resemblance to some aspects of 'auditing' as practiced by Scientologists.
As to perhaps even riskier practices, I mean they happened not “in” but “around” CFAR in the sense of not officially happening under the auspices of CFAR, or being formally condoned by them, though they occurred within the CFAR alumni community and the Bay Area rationality community. It’s murky, though there was conduct in the lives of private individuals that CFAR informally enabled or emboldened, and could’ve/should’ve done more to prevent. For the record, I’m aware CFAR has effectively admitted those past mistakes, so I don’t want to belabor any point of moral culpability beyond what has been drawn out to death on LessWrong years ago.
Anyway, activities that occurred among rationalists in the social network in CFAR's orbit, which arguably rose to the level of triggering behaviour comparable in extremity to psychosis, include 'dark arts' rationality and some of the edgier experiments of post-rationalists. That includes some memes spread and behaviours induced in some rationalists by Michael Vassar, Brent Dill, etc.
To be fair, I’m aware much of that was a result not of spooky, pseudo-rationality techniques, but some unwitting rationalists being effectively bullied into taking wildly mind-altering drugs, as guinea pigs in some uncontrolled DIY experiment. While responsibility for these latter outcomes may not be as attributable to CFAR, they can be fairly attributed to some past mistakes of the rationality community, albeit on a vague, semi-collective level.
Several months ago, I was telling organizers with PauseAI, like Holly Elmore, that they should be emphasizing this more.
I’m tentatively interested in participating in some of these debates. That’d depend on details of how the debates would work or be structured.