Yeah, I heard about that. As far as I can tell, it failed for reasons specific to that particular implementation, not because of any problem with the broader idea of running a project like this. In addition, Duncan has on multiple occasions expressed support for the idea of running a similar project that can learn from the mistakes made here. So my question is, why haven’t more organizations like that been started?
Thanks for the advice. I was saying that this type of community might be good not just because I would benefit, but because I know a lot of other people who also would, and because, due to a lot of arbitrary-seeming concerns, it’s likely highly neglected.
First off, I specifically spoke to the LessWrong moderation team in advance of writing this, with the intention of rephrasing my questions so they didn’t sound like I was trying to make a point. I’m sorry if I failed at that, but making particular points was not my intention. Second, you seem to be reading my post in a very adversarial tone, when no adversarial tone was intended.
Now, on to my thoughts on your particular points.
I have in fact considered that the rest of EA is incentivized to pretend that there aren’t problems. In fact, I’d assume that most of EA has. I’m not accusing the Community Health team of causing any particular scandal; just of broadly introducing an atmosphere where comparatively minor incidents may potentially get blown out of proportion.
There seem to be clear and relevant parallels here. Seven of the fifteen people named as TESCREALists in the First Monday paper are Jewish, and many stereotypes attributed to TESCREALists in this conspiracy theory (victimhood complex, manipulating our genomes, ignoring the suffering of Palestinians) line up with antisemitic stereotypes and go far beyond just “powerful people controlling things.”
I want to take a maximizing approach myself because I was under the impression that EA is about maximizing. In my mind, if you just wanted to do a lot of good, you’d work at just about any nonprofit. In contrast, EA is about doing the most good that you can do.
A few questions about recent developments in EA
And one more thing: if some people are nervous, wouldn’t it be possible to get funding from the people who are enthusiastic?
And what I’m describing isn’t an individual project full of people who live together; it’s coordinating a bunch of people who work on many different projects to move to the same general area. And even if I were describing an individual project full of people who live together, every single failure of such a project within EA is a rounding error compared to the Manhattan Project, for better or worse.
I thought the whole point of EA was that we based our grantmaking decisions on rigorous analyses rather than hunches and anecdotes.
Seems like the kind of thing that should have at least one FTE on it. Is there a reason no one has really put a lot of time into it (e.g. a specific compelling argument that this isn’t the right call), or is it just that no one has gotten to it?
How many FTEs are working on this problem?
Additionally, I wonder why there hasn’t been an effort to start a more “intense” EA hub somewhere outside the Bay to save on rent and office costs. Seems like we’ve been writing about coordination problems for quite some time; let’s go and solve one.
It is serious, and in my time zone, it wasn’t April 1.
Thanks for the advice. To be clear, I’m not certain that a hardcore environment would be the best environment for me either, but it seems worth a shot. And judging by how people tend to change in their involvement in EA as they get older, I’ll probably only be as hardcore as this for like ten years.
Thanks for the reflection.
I’ve read about Leverage, and it seems like people are unfairly hard on it. They’re the ones who basically started EA Global, and people don’t give them enough credit for that. And honestly, even after what I’ve read about them, their work environment still sounds better to me than a supposedly “normal” one.
Thanks for the advice. I was more wondering if there was some specific organization that was known to give that sort of environment and was fairly universally recognized as e.g. “the Navy SEALs of EA” in terms of intensity, but this broader advice sounds good too.
This was semi-serious, and maybe “totalizing” was the wrong word for what I was trying to say. Maybe the word I more meant was “intense” or “serious.”
CLARIFICATION: My broader sentiment was serious, but my phrasing was somewhat exaggerated to get my point across.
[Question] Where would I find the hardcore totalizing segment of EA?
I think there are two meanings of “distraction” here. The first, more “serious” meaning, which the media probably uses, is the generic sense of “something which distracts people.” The second, which a lot of people in the “AI ethics” community like to use, is the sense in which this was deliberately thought up as a diversion by tech companies to distract the public from their own misconduct.
A problem I see is people equivocating between these two meanings, and thus inadvertently arguing against the media’s weird steel-man version of the AI ethicists’ core arguments, instead of the real arguments they are making.
No one should ever move to the Bay Area, for many reasons. Seattle is fine.
Don’t really think there is any; in fact, there’s plenty of evidence to the contrary, from the polls I’ve seen.
I understand that it’s perilous, but so is donating a kidney, and a large number of EAs have done that anyway.