I mostly haven't been thinking about what the ideal effective altruism community would look like, because it seems like most of the value of effective altruism may be well approximated by its impact on steering the world towards better AGI futures. But I think that even in worlds where AI risk wasn't a problem, the effective altruism movement would seem lackluster in some ways.
I am thinking especially of the effect that it often has on university students and younger people. My sense is that EA sometimes influences those people to be closed-minded or at least doesn’t contribute to making them as ambitious or interested in exploring things outside “conventional EA” as I think would be ideal. Students who come across EA often become too attached to specific EA organisations or paths to impact suggested by existing EA institutions.
In an EA community that was more ambitiously impactful, a higher proportion of folks would at least be strongly considering things like: starting startups that could become really big; traveling to various parts of the world to form a view about how poverty affects welfare; keeping long Google Docs with their current best guesses for how to get rid of factory farming; looking at non-EA sources to figure out which more effective interventions GiveWell might be missing, perhaps because they're somewhat controversial; doing more effective science/medical research; writing something on better thinking and decision-making that could be as influential as Eliezer's sequences; expressing curiosity about whether charity is even the best way to improve human welfare; and trying to fix science.
And a lower proportion of these folks would be applying to jobs on the 80,000 Hours job board or choosing to spend more time within the EA community rather than interacting with the most ambitious, intelligent, and interesting people amongst their general peers.
Just some ranty thoughts about EA university groups without any suggestions. Don’t take too seriously.
Lots of university EA group organisers I have met seem not to be very knowledgeable about EA. A common type is someone who got involved for social reasons and uses EA terms in conversation but doesn't really get it. I can imagine this being off-putting to the kinds of people these groups would like to attract. This is probably less of a problem at top universities, though.
It also feels awkward to mention this to people, because I know these group organisers have good intentions, but they may be putting cool people off engaging with EA groups. It is even more awkward when community building is their part-time job. It's not that they're bad, it's just that I wouldn't be excited about a promising student first coming across EA by interacting with them.
Less confidently: in some groups there seems to be too much emphasis on being very agenty right away and making big projects happen (especially community-building projects), compared to having a culture of intellectual curiosity and prioritising interesting conversations that are not about community building. It also feels like for young people in EA there are strong incentives to network hard, go to Berkeley, and go to a bunch of retreats so you have cool, important EA friends, and all of this cuts into the time you could spend just sitting down to learn important things, skill up, and introspect.
There have been posts on the EA forum pointing at similar things so it feels like the situation might become better over the next year but this is just me recording what my experience has been like at times.
A few months ago, I felt like some people I knew in community building were doing a thing where they believed (or believed they believed) that AI existential risk was a really big problem, but instead of just saying that to people (eg: new group members), they said it was too weird to state outright, so you had to walk people through less "weird" things, like content about global health and development and animal welfare, before telling them you were really concerned about this AI thing.
And even when you got to the AI topic, you had to build people's trust by talking about misuse risks first in order to be more convincing. This would have been an okay thing to do if those were their actual beliefs. But in a couple of cases, this was an intentional strategy to warm people up to the "crazy" idea that AI existential risk is a big problem.
This bothered me.
To the extent that those people now feel more comfortable directly stating their actual beliefs, this feels like a good thing to me. But I'm also worried that people still won't directly state their beliefs and will instead continue to play persuasion games with new people, just about different things.
Eg: one way this could go wrong is that group organisers try to make it seem to new people like they're more confident about which interventions within AI safety are helpful than they actually are. Things like: "Oh hey, you're concerned about this problem, here are impactful things you can do right away, such as applying to this org or going through this curriculum", when they are much more uncertain (or should be?) about how useful the work done by the org is or how correct/relevant the content in the AI safety curriculum is.
I have a couple of thoughts here, as a community builder and as someone who has thought similar things to what you've outlined.
I don't like the idea of bringing people into EA based on false premises. It feels weird to me to 'hide' parts of EA from newcomers. However, I think the considerations involved are more nuanced than this. When I have an initial conversation with someone about what EA is, I find it difficult to capture everything in a way that comes across as sensible. If I say, "EA is a movement concerned with finding the most impactful careers and charitable interventions," I think to many people this automatically comes across as being about issues of global health and poverty. 'Altruism' is in the name, after all. I don't think many people associate the word 'altruism' with charities aimed at ensuring that artificial intelligence is safe.
If I foreground concerns about AI and say, "EA is a movement aimed at finding the most impactful interventions… and one of the top interventions that people in the community care about is ensuring that artificial intelligence is safe," that also feels like it's not really capturing the essence of EA. Many people in EA primarily care about issues other than AI, and summarising EA in this way to newcomers is going to turn off some people who care about those other issues.
The idea that AI could be an existential risk is (unfortunately) just not a mainstream idea yet. Over the past several months it seems to have been talked about a lot outside of EA, but prior to that, very few major media organisations or celebrities brought attention to it. So from my point of view, I can understand community builders wanting to warm people up to the idea. A minority of people will be convinced by hearing good arguments for the first time. Most people (myself included) need to hear something said again and again in different ways in order to take it seriously.
You might say that these are really simplistic ways of talking about EA, and that there's a lot more I could say than a couple of simple sentences. That's true, but in many community-building circumstances, a couple of sentences is all I am going to get. For example, when I've run clubs-fair booths at universities, many students just want a short explanation of what the group stands for. When I've interacted with friends or family members who don't know what EA is, most of the time I get the sense that they don't want a whole spiel.
I also think it is not necessarily a 'persuasion game' to think about how to bring more people on board with an idea; it is thinking seriously about how to communicate ideas effectively. Communication is an art form, and there are good and bad ways to go about it. Celebrities, media organisations, politicians, and public health officials all have to figure out how to communicate their ideas to the public, and it is often not as simple as 'directly stating their actual beliefs.' Yes, I agree we should be honest about what we think, but there are many different ways to go about this. For example, I could say, "I believe there's a decent chance AI could kill us all," or I could say, "I believe that we aren't taking the risks of AI seriously enough." Both of these communicate a similar idea, but will be taken quite differently.
I found it really difficult to reply to this comment, partly because it is difficult for me to inhabit the mindset of trying to be a representative for EA. When I talk to people about EA, including when I was talking to students who might be interested in joining an EA student group, it is more similar to “I like EA because X, the coolest thing about EA for me is Y, I think Z though other people in EA disagree a bunch with my views on Z for W reason and are more into V instead” rather than trying to give an objective perspective on EA.
I'm just really wary of changing the things I say until what I say gets people to do the thing I want (sign up for my student group, care about AI safety, etc.). There are some situations where that might be warranted, like if you're doing some policy-related thing. However, when running a student group and trying to attract people who are really smart and good at thinking, it seems like the thing I'd want to do is just state what I believe and why I believe it (even and especially if my reasons sound dumb) and then hear where the other person agrees or disagrees with me. I don't want to state arguments for EA or AI safety to new members again and again in different ways until they get on board with all of it; I want us to collaboratively figure things out.
I hope more people, especially EA community builders, take some time to reevaluate the value of growing the EA movement and of EA community building. It seems like a lot of community builders are acting as if "making more EAs" is good for its own sake. I'm much less sure about the value of growing the EA community, and more uncertain about whether it is positive at all. It seems like a lot of people are having to put energy into doing PR, making EA look good, and fighting fires in the community, when their time could be better spent directly focusing on how to solve the big problems.
But I also think directly focusing on how to solve the big problems is difficult and “get more people into EA and maybe some of them will know how to make progress” feels like an easy way out.
My intuition is that having more people does mean more potential fires could be started (since each person could start a fire), but it also means each fire is less damaging in expectation as it’s diluted over more people (so to speak). For instance, the environmentalist movement has at times engaged in ecoterrorism, which is (I think pretty clearly) much worse than anything anyone in EA has ever done, but the environmentalist movement as a whole has generally weathered those instances pretty well as most people (reasonably imho) recognize that ecoterrorists are a fringe within environmentalism. I think one major reason for this is that the environmentalist movement is quite large, and this acts as a bulwark against the entire movement being tarred by the actions of a few.
I guess I make comments like the one above because I think few people doing EA community building are seriously considering that the actual impact (and expected impact) of the EA movement could be net negative. It might not be, and I'm leaning towards it being positive, but I think it is a serious possibility that the EA movement causes more harm than good overall, for example via having sped up AI timelines through DeepMind/OpenAI/Anthropic, and via a few EA community members committing one of the biggest frauds ever. Or via vaguer things, like EAs fucking up cause prioritisation, maximising really hard, and not being able to course-correct later.
The way the EA movement could end up not being net harmful is if we are ambitious but also prioritise being correct and having good epistemics really hard. This is not the vibe I get when I talk to many community builders. A lot of them seem happy with "making more EAs is good" and forget that the mechanism for EA being positively impactful relies pretty heavily on our ability to steer correctly. I think they've decided too quickly that "the EA movement is good, therefore I must protect and grow it". I think EA ideas are really good; I'm less sure about the movement.
If EA is net harmful then people shouldn't work directly on solving problems either, we should just pack up and go home.
I like EA ideas, and I think sanely trying to solve the biggest problems is a good thing. I am less sure about the current EA movement, partly because of the track record of the movement so far, and partly because of intuitions that movements this focused on gaining influence and recruiting more people tend to go off track, and it doesn't look to me like enough is being done to preserve people's sanity and get them to think clearly in the face of the mind-warping effects of the movement.
I think it could both be true that we need a healthy EA (or longtermist) movement to make it through this century and that the current EA movement ends up causing more harm than good. Just to be clear, I currently think that on its current trajectory the EA movement will end up being net good, but I am not super confident in this.
Also, sorry my answer is mostly just coming from thinking about AI x-risk stuff rather than EA as a whole.
EA didn't cause the FTX fraud.
Huh, not sure what you mean. It sure seems like the FTX fraud was committed by prominent EAs, in the name of EA principles, using the resources of the EA movement. Inasmuch as EA has caused anything, I feel like it has caused the FTX fraud.
Like, by the same logic you could be like “EA didn’t cause millions of dollars to be allocated to malaria nets”. And like, yeah, there is something fair about that, in the sense that it was ultimately individual people or philanthropists who gave money to EA causes, but at the end of the day, if you get to take some credit for Dustin’s giving, you also have to take some blame for Sam’s fraud.
I really, really, realllllly disagree. Saying that EA caused FTX is more like saying EA caused Facebook than the contrapositive. You should have a pretty firm prior that someone who becomes a billionaire does it primarily because they enjoy the immense status and prestige that being “the world’s richest U30” bestows on a person; likewise someone committing fraud to keep that status.
My primary character assessment at this point is that he was an EA who was also one of those flavors of people who become quasi-sociopaths when they become rich and powerful. Nothing in Sam's actual, concrete actions seems to indicate differently, and indeed he actually spent the grander part of that money on consumption goods like mansions for himself and his coconspirators. Maybe he really was in it for the good, at the beginning, but I just can't believe that someone making a late decision to start a Ponzi scheme "for the greater good" would act like he did.
(Also, using the resources of the EA movement how, exactly? Seems to me like his fraud would have been just as effective had he not identified as an EA. He received investment and consumer funds because of the firm’s growth rate and Alameda’s generous trades, respectively, not because people were interested in contributing to his charities.)
I don’t really understand the distinction here. If a core member of the EA community had founded Facebook, recruiting for its leadership primarily from members of EA, and was acting throughout as a pretty prominent member of the EA community, I would also say that “EA had a substantial responsibility in causing Facebook”. But actual Facebook was founded before EA was even a thing, so this seems totally non-comparable to me.
And while I don’t really buy your character-assessment, I don’t really see what this has to do with the blame analysis. If EA has some prominent members who are sociopaths, we should take responsibility for that in the same way as we would take credit for some prominent members who are saints.
Separately, this part seems confidently wrong:
grander part of that money on consumption goods like mansions for himself and his coconspirators
I am quite confident Sam spent <$100MM on consumption, and the FTX Future Fund has given away on the order of $400MM in grants, so this statement is off by around a factor of 2, and more likely by a full order of magnitude, though that depends a bunch on how you count political contributions and other stuff that’s kind of ambiguously charity vs. helpful to Sam.
the FTX Future Fund has given away more than $400MM in grants
Do you have links/evidence here? I remember counting less than 250M when I looked at their old website, not even accounting for some of the promised grants that presumably never got paid out.
I remember this number came up in conversations at some point, so don’t have any source. Plausible the number is lower by a factor of 2 (I actually was planning to change that line to reflect my uncertainty better and edit to “on the order of $400MM grants” since $600MM wouldn’t have surprised me, and neither would have $200MM).
I don’t really understand the distinction here. If a core member of the EA community had founded Facebook...[snip]...and was acting throughout as a pretty prominent member of the EA community, I would also say that “EA had a substantial responsibility in causing Facebook”
I likewise don’t understand what you’re finding weird about my position? If Eliezer Yudkowsky robbed a bank, that wouldn’t make LessWrong “responsible for a bank robbery”, even if Eliezer Yudkowsky were in the habit of donating a proportion of his money to AI alignment organizations. Looking at the AU-EY grabbing the money out of the brown paper bag and throwing it at strippers, you would conclude he mostly did it for his own reasons, just like you would say of a robber that happened to be a congressman.
If we could look into AU-EY's mind and see that he thought he was doing it "in the name of EA", and indeed donated the robbed funds to charity, then, sure, I'd freely grant that EA is at least highly complicit. But my point is that I don't believe that was SBF's main motivation for founding FTX, and I think that absent EA he probably had a similar chance at the outset of running such frauds. You can say that SBF's being a conditional sociopath is immaterial to his reducing "the group of people with the EA sticker's point total", but it's relevant for answering the more productive question of whether EA made him more or less likely to commit massive fraud.
[unsnip]...recruiting for its leadership primarily from members of EA...[/unsnip]
Well, I guess recruiting from EA leadership is one thing, but to what extent did FTX actually benefit from an EA-affiliated talent pool? I reviewed most of the executive team during my manifold betting and didn’t actually come across anybody who I could find had a history of EA affiliation besides SBF (though you may know more than me).
I am quite confident Sam spent <$100MM on consumption, and the FTX Future Fund has given away more than $400MM in grants, so this statement is off by a factor of 4, and more likely by a full order of magnitude.
I actually didn't know that. Is this counting the Anthropic investment or did FTXFF really give-away give-away that much money?
The Anthropic investment alone is $500MM; this is in addition to that.
Ok, that gives me some pause about his motivations… Probably enough to change my opinion entirely, but, still.
I reviewed most of the executive team during my manifold betting and didn’t actually come across anybody who I could find had a history of EA affiliation besides SBF (though you may know more than me).
That… seems really confused to me. Caroline was part of EA Stanford, and almost all of the early Alameda staff were heavily involved EAs (including past CEA CEO Tara MacAulay). I know less about Nishad, but he was definitely very heavily motivated by an EA philosophy while he was working at FTX, had read a lot of the LessWrong content, etc.
According to FTX’s director of engineering Nishad Singh, Alameda “couldn’t have taken off without EA,” because “all the employees, all the funding—everything was EA to start with.”
It seems really quite beyond a doubt to me that FTX wouldn’t have really existed without the EA community existing. Even the early funding for Alameda was downstream of a bunch of EA funders.
I likewise don’t understand what you’re finding weird about my position? If Eliezer Yudkowsky robbed a bank, that wouldn’t make LessWrong “responsible for a bank robbery”
I mean, if Eliezer robbed a bank, I think I would definitely think the rationality community is responsible for a bank robbery (not “LessWrong”, which is a website). That seems like the only consistent position by which the rationality community can be responsible for anything, including good things. If the rationality community is not responsible for Eliezer robbing a bank, then it definitely can’t be responsible for any substantial fraction of AI Alignment research either, which is usually more indirectly downstream of the core people in the community.
It seems really quite beyond a doubt to me that FTX wouldn’t have really existed without the EA community existing. Even the early funding for Alameda was downstream of a bunch of EA funders.
Yeah, I guess I'm just wrong then. I'm confused as to why I didn't remember reading the bit about Caroline in particular; it's literally on her Wikipedia page that she was an EA at Stanford.
I mean, if Eliezer robbed a bank, I think I would definitely think the rationality community is responsible for a bank robbery (not “LessWrong”, which is a website). That seems like the only consistent position by which the rationality community can be responsible for anything, including good things. If the rationality community is not responsible for Eliezer robbing a bank, then it definitely can’t be responsible for any substantial fraction of AI Alignment research either, which is usually more indirectly downstream of the core people in the community.
FWIW I still don't understand this perspective, at all. It seems bizarre. The word "responsible" implies some sort of causal relationship between the ideology and the action; i.e., Eliezer + exposure to/existence of the rationalist community --> robbed bank. Obviously AI Alignment research is downstream of rationalism, because you can make an argument, at least, that some AI alignment research wouldn't have happened if those researchers hadn't been introduced to the field by LessWrong et al. But just because Eliezer does something doesn't mean rationalism is responsible for it, any more than calculus or the scientific method was "responsible" for Isaac Newton's neuroticisms.
It sounds like the problem is you’re using the term “Rationality Community” to mean “all of the humans who make up the rationality community” and I’m using the term “Rationality Community” to refer to the social network. But I prefer my definition, because I’d rather discuss the social network and the ideology than the group of people, because the people would exist regardless, and what we really want to talk about is whether or not the social network is +EV.
The word “responsible” implies some sort of causal relationship between the ideology and the action
No, it implies a causal relationship between the community and the action. I don’t see any reason to constrain blame to “being caused by the ideology of the community”. If members of the community cause it, and the existence of the community had a pretty direct effect, then it sure seems like you should hold the community responsible.
In your last paragraph, you sure are also conflating between “ideology” and “social network”. It seems really clear that the social network of EA played a huge role in FTX’s existence, so it seems like you would agree that the community should play some role, but then for some reason you are then additionally constraining things to the effect of some ill-specified ideology. Like, can a community of people with no shared ideology literally not be blamed for anything?
It seems really clear that the social network of EA played a huge role in FTX’s existence, so it seems like you would agree that the community should play some role, but then for some reason you are then additionally constraining things to the effect of some ill-specified ideology
No, I agree with you now that at the very least EA is highly complicit if not genuinely entirely responsible for causing FTX.
I don’t think we actually disagree on anything at this point. I’m just pointing out that, if the community completely disbanded and LessWrong shut down and rationalists stopped talking to each other and trained themselves not to think about things in rationalist terms, and after all that AU-Yudkowsky still decided to rob a bank, then there’s a meaningful sense in which the Inevitable Robbery was never “the rationality community’s” fault even though AU-Yudkowsky is a quintessential member. At least, it implies a different sort of calculus WRT considering the alternative world without the rationality community.
For instance, the environmentalist movement has at times engaged in ecoterrorism, which is (I think pretty clearly) much worse than anything anyone in EA has ever done
Alas, I do think this defense no longer works, given FTX, which seems substantially worse than all the ecoterrorism I have heard about (and IMO also the capabilities research that's downstream of our work, like RLHF being the primary difference between ChatGPT and GPT-3, but that's a longer argument, and I wouldn't want to bring it up as a commonly-acknowledged point).
Alas, I do think this defense no longer works, given FTX, which seems substantially worse than all the ecoterrorism I have heard about.
I disagree with this, because I believe FTX's harm was way less bad than most ecoterrorism, primarily because of the kind of disutility involved. FTX hasn't actually injured or killed people, unlike a lot of ecoterrorism. It stole billions, which isn't good, but right now no violence is involved. I don't think FTX is good, but so far no violence has been attributed to, or even much advocated by, EAs.
Yeah, doesn't seem like a totally crazy position to take, but I don't really buy it. I bet a lot of people would accept some probability of having violence inflicted on them in exchange for $8 billion, and I don't think this kind of categorical comparison of different kinds of harm checks out. It's hard to really imagine the scale of $8 billion, but I am confident that Sam's actions have killed, indirectly via a long chain of actions but in a way he is nevertheless directly responsible for, at least 20-30 people, which I think is probably more than any ecoterrorism that has been committed (though I am not that confident about the history of ecoterrorism, so maybe there was actually something that got to that order of magnitude?).
IMO, ecoterrorism's deaths were primarily from the Unabomber, who caused at least 3 deaths and 23 injuries. I may retract my first comment if I don't find more evidence than this.
The Unabomber does feel kind of weird to blame on environmentalism. Or like, I would give environmentalism a lot less blame for the Unabomber than I would give us for FTX.
In the past, I think I cared excessively about EA, and about myself, seeming respectable, and I think I was wrong about the tradeoffs there. As one concrete example, when talking to people about AI safety I avoided linking to blog posts, even when I thought they were more useful to read, and instead sent people links to more legitimate-seeming academic papers and researchers, because I thought that made the field seem more credible. I think this and other similar things I did were bad.
In the past, I didn't care enough about people's character and how much integrity they had. I was very forgiving when I found out someone had intentionally broken a small promise to a colleague or acted in a manipulative way towards someone, because in those cases it seemed to me like the actual magnitude of the harm caused was small relative to the impact of the person's work. I now think those small harms add up and could be quite costly, by adding mistrust and friction to interactions with others in the community.
I dismissed AI x-risk concerns in 2019 without making an honest attempt to learn about the arguments because it sounded weird. I think that was a reasonable thing to do given the social environment I was in. I think the really big mistake I made there though was not stating why I was unconvinced to my friends who had thought about it because I was afraid of seeming dumb due to not having read all the arguments already and because I was afraid of feeling pressured if I did try to argue about it.
I think I thought university EA group organising was the most useful thing for me to do. This seemed sensible to believe for a while but I think I stuck with it for too long because I had signed up to do it. If I had been honestly looking for evidence of the usefulness of the group organising activities I was doing, I would have realised a lot quicker that it was more useful for me to stop and do other things instead. I think this cost me ~100 hours.
There are different ways to approach telling people about effective altruism (or caring about the future of humanity or AI safety etc):
“We want to work on solving these important problems. If you care about similar things, let’s work together!”
“We have figured out what the correct things to do are and now we are going to tell you what to do with your life”
It seems like a lot of EA university group organisers are doing the second thing, and to me this feels weird and bad. A lot of our disagreement about specific things, like how I feel it is icky to use prepared speeches written by someone else to introduce people to EA, and bad to think of people who engage with your group in terms of where they are in some sort of pipeline, comes from them thinking about things in that second frame.
I think the first framing is a lot healthier, both for communities and for individuals who are doing activities under the category of “community building”. If you care deeply about something (eg: using spreadsheets to decide where to donate, forming accurate beliefs, reducing the risk we all die due to AI, solving moral philosophy, etc) and you tell people why you care and they’re not interested, you can just move along and try to find people who are interested in working together with you in solving those problems. You don’t have to make them go through some sort of pipeline where you start with the most appealing concepts to build them up to the thing you actually want them to care about.
It is also healthier for your own thinking because putting yourself in the mindset of trying to persuade others, in my experience, is pretty harmful. When I have been in that mode in the past, it crushed my ability to notice when I was confused.
I also have other intuitions for why doing the second thing just doesn’t work if you want to get highly capable individuals who will actually solve the biggest problems but in this comment, I just wanted to point out the distinction between the two ways of doing things. I think they are distinct mindsets that lead to very different actions.
It does feel like I’m defecting a little bit by using a pseudonymous account. I do feel like I’m somewhat intentionally trying to inject my views while getting away with not paying the reputational cost of having them.
My comments use fewer caveats than they would if I were posting under my real name, and I’m more likely to blurt things I currently think without spending lots of time thinking about how correct I am. I also feel under little obligation to signal thoughtfulness and niceness when not writing using my name. Plausibly this contributes to lowering the quality of the EA forum but I have found it helpful as a (possibly temporary?) measure to practise posting anything at all. I think that I have experiences/opinions that I want others on the EA forum to know about but don’t want to do the complicated calculation to figure out if it is worth posting them under my real name (where a significant part of the complicated calculation is non-EA people coming across them while searching for me on the internet).
I also prefer the situation where people in the EA community can anonymously share their controversial views over the situation where they don't say anything at all, because it makes it easier to get a more accurate pulse of the movement. I have mostly spent lots of time in social groups where saying things I think are true would have been bad for me, and I do notice that this caused my thinking to be a bit stunted, as I avoided thinking thoughts that would be bad to share. Writing pseudonymously feels helpful for noticing that problem and practising thinking more freely.
Also idk pseudonyms are really fun to use, I like the semi-secret identity aspect of using them.
Yeah, pseudonyms are great. There have been recent debates about people using one-off burner accounts to make accusations, but those don't reflect at all on the merits of using durable pseudonyms for general conversation.
The degree of reputation and accountability that durable pseudonyms provide might be less than using a wallet name, but it’s still substantial, and in practice it’s a perfectly sufficient foundation for good discourse.
I still think I've found being pseudonymous more useful than writing under my name. It does feel like I'm less restricted in my thinking because I know there are no direct negative or positive effects on me personally from sharing my thoughts. For example, I've surprisingly found it easier to express genuine appreciation for things or people. Perhaps I'm too obsessed with noticing how the shape of my thoughts changes depending on how I think they will be perceived, but it has been very interesting to notice that. Like, it genuinely feels like there are more thoughts I am allowed to think when I'm trying on a pseudonym (I think this was much starker a few months ago, so maybe I've squeezed out most of the benefit by now).
Maybe folks funding community building at universities should make the option of: “doing paid community building stuff at the start of term for a couple of months and then focusing on personally skilling up and learning more things for the rest of the year” more obvious. For a lot of group organisers, this might be a better idea than consistently spending 10 hours/week group organising throughout the year.
Since EAG: SF is coming up and not all of us will get to talk to everyone it would be valuable for us to talk to, it would be cool for people to document takeaways from specific conversations they had with people (with those people’s consent) rather than just takeaways of the entire conference, which people have posted on the forum in the past.
I feel like I basically have no idea, but if I had to guess I’d say ~40% of current human lives are net-negative, and the world as a whole is worse than nothing for humans alive today because extreme suffering is pretty bad compared to currently achievable positive states. This does not mean that I think this trend will continue into the future; I think the future has positive EV due to AI + future tech.
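To make the implicit arithmetic explicit, here is a rough sketch of the model behind that guess, assuming welfare simply adds across people (that additivity, and the symbols below, are my own assumptions rather than anything stated above):

```latex
% Sketch under an assumed additive model of welfare.
% Let f be the fraction of net-negative lives, \bar{u}^- the average magnitude of
% (negative) welfare in those lives, and \bar{u}^+ the average welfare of the rest.
\[
  \text{total welfare} < 0
  \quad\Longleftrightarrow\quad
  f\,\bar{u}^- \;>\; (1-f)\,\bar{u}^+ .
\]
% With f \approx 0.4 this becomes \bar{u}^- > 1.5\,\bar{u}^+ : the average net-negative
% life must be at least ~1.5 times as bad as the average net-positive life is good,
% which is the quantitative work being done by "extreme suffering is pretty bad
% compared to currently achievable positive states".
```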
I share these intuitions and it is a huge part of the reason reducing x-risk feels so emotionally compelling to me. It would be so sad for humanity to die out so young and unhappy never having experienced the awesome possibilities that otherwise lie in our future.
Like, the difference between what life is like and the sorts of experiences we can have right now, and just how good life could be and the sorts of pleasures we could potentially experience in the future, is so incredibly massive.
Also feeling like the only way to make up for all the suffering people experienced in the past and are experiencing now and the suffering we inflict on animals is to fill the universe with good stuff. Create so much value and beautiful experiences and whatever else is good and positive and right, so that things like disease, slavery, the torture of animals seem like a distant and tiny blot in human history.
I mostly haven’t been thinking about what the ideal effective altruism community would look like, because it seems like most of the value of effective altruism might just get approximated to what impact it has on steering the world towards better AGI futures. But I think even in worlds where AI risk wasn’t a problem, the effective altruism movement seems lackluster in some ways.
I am thinking especially of the effect that it often has on university students and younger people. My sense is that EA sometimes influences those people to be closed-minded or at least doesn’t contribute to making them as ambitious or interested in exploring things outside “conventional EA” as I think would be ideal. Students who come across EA often become too attached to specific EA organisations or paths to impact suggested by existing EA institutions.
In an EA community that was more ambitiously impactful, there would be a higher proportion of folks at least strongly considering doing things like starting startups that could be really big, traveling to various parts of the world to form a view about how poverty affects welfare, having long google docs with their current best guesses for how to get rid of factory farming, looking at non-”EA” sources to figure out what more effective interventions GiveWell might be missing perhaps because they’re somewhat controversial, doing more effective science/medical research, writing something on the topic of better thinking and decision-making that could be as influential as Eliezer’s sequences, expressing curiosity about the question of whether charity is even the best way to improve human welfare, trying to fix science.
And a lower proportion of these folks would be applying to jobs on the 80,000 Hours job board or choosing to spend more time within the EA community rather than interacting with the most ambitious, intelligent, and interesting people amongst their general peers.
Just some ranty thoughts about EA university groups without any suggestions. Don’t take too seriously.
Lots of university EA group organisers I have met seem to not be very knowledgeable about EA. A common type is someone who had gotten involved for social reasons and uses EA terms in conversations but doesn’t really get it. I can imagine this being offputting to the types of people these groups would like to join. Probably this is less of a problem at top universities though.
It also feels awkward to mention this to people because I know these group organisers have good intentions but they may be turning off cool people from engaging with the EA groups. It is even more awkward when community building is their part-time job. It’s not that they’re bad, it’s just that I wouldn’t be excited about a promising student first coming across EA by interacting with them.
Less confidently, it seems like in some groups there is too much of an emphasis on being very agenty right away and making big projects happen (especially community-building projects) compared to having a culture of intellectual curiosity and prioritising making interesting conversations happen that are not about community building. It also feels like for young people in EA, there are strong incentives to network hard, go to Berkeley, go to a bunch of retreats so you have cool important EA friends and all of this cuts down the time you can just sit down and learn important things, skill up and introspect.
There have been posts on the EA forum pointing at similar things so it feels like the situation might become better over the next year but this is just me recording what my experience has been like at times.
A few months ago I felt like some people I knew within community building were doing a thing where they believed (or believed they believed) that AI existential risk was a really big problem but instead of just saying that to people (eg: new group members), they said it was too weird to just say that outright and so you had to make people go through less “weird” things like content about global health and development and animal welfare before telling them you were really concerned about this AI thing.
And even when you got to the AI topic, had to make people trust you enough by talking about misuse risks first in order to be more convincing. This would have been an okay thing to do if those were their actual beliefs. But in a couple of cases, this was an intentional thing to warm people up to the “crazy” idea that AI existential risk is a big problem.
This bothered me.
To the extent that those people now feel more comfortable directly stating their actual beliefs, this feels like a good thing to me. But I’m also worried that people still won’t just directly state their beliefs and instead still continue to play persuasion games with new people but about different things.
Eg: one way this could go wrong is group organisers try to make it seem to new people like they’re more confident about what interventions within AI safety are helpful than they actually are. Things like: “Oh hey you’re concerned about this problem, here are impactful things you can do right away such as applying to this org or going through this curriculum” when they are much more uncertain (or should be?) about how useful the work done by the org is or how correct/relevant the content in the AI safety curriculum is.
I have a couple thoughts here, as a community builder, and as someone who has thought similar things to what you’ve outlined.
I don’t like the idea of bringing people into EA based on false premises. It feels weird to me to ‘hide’ parts of EA to newcomers. However, I think the considerations involved are more nuanced than this. When I have an initial conversation with someone about what EA is, I find it difficult to capture everything in a way that comes across as sensible. If I say, “EA is a movement concerned with finding the most impactful careers and charitable interventions,” to many people I think this automatically comes across as concerning issues of global health and poverty. ‘Altruism’ is in the name after all. I don’t think many people associated the word ‘altruism’ with charities aimed at ensuring that artificial intelligence is safe.
If I forefront concerns about AI and say, “EA is a movement aimed at finding the most impactful interventions… and one of the top interventions that people in the community care about is ensuring that artificial intelligence is safe,” that also feels like it’s not really capturing the essence of EA. Many people in EA primarily care about issues other than AI, and summarising EA in this way to newcomers is going to turn off some people who care about other issues.
The idea that AI could be a existencial risk is (unfortunately) just not a mainstream idea yet. Over the past several months, it seems like it has been talked about a lot outside of EA, but prior to that, there were very few major media organisations/celebrities that brought attention to it. So from my point of view, I can understand community builders wanting to warm up people to the idea. A minority of people will be convinced by hearing good arguments for the first time. Most people (myself included) need to hear something said again and again in different ways in order to take it seriously.
You might say that these are really simplistic ways of talking about EA, and there’s a lot more than I could say than a couple simple sentences. That’s true, but in many community building circumstances, a couple sentences is all I am going to get. For example, when I’ve run clubs fair booths at universities, many students just want a short explanation of what the group stands for. When I’ve interacted with friends or family members who don’t know what EA is, most of the time I get the sense that they don’t want a whole spiel.
I also think it is not necessarily a ‘persuasion game’ to think about how to bring more people on board with an idea—it is thinking seriously about how to communicate ideas in an effective way. Communication is an art form, and there are good ways to go about it and bad ways to go about it. Celebrities, media organisations, politicians, and public health officials all have to figure out how to communicate their ideas to the public, and it is often not as simple as ‘directly stating their actual beliefs.’ Yes, I agree we should be honest about what we think, but there are many different ways to go about this. for example, I could say, “I believe there’s a decent chance AI could kill us all,” or I could say, “I believe that we aren’t taking the risks of AI seriously enough.” Both of these are communicating a similar idea, but will be taken quite differently.
Thank you for sharing your thoughts here.
I found it really difficult to reply to this comment, partly because it is difficult for me to inhabit the mindset of trying to be a representative for EA. When I talk to people about EA, including when I was talking to students who might be interested in joining an EA student group, it is more similar to “I like EA because X, the coolest thing about EA for me is Y, I think Z though other people in EA disagree a bunch with my views on Z for W reason and are more into V instead” rather than trying to give an objective perspective on EA.
I’m just really wary of changing the things I say until it gets people to do the thing I want (sign up for my student group, care about AI safety, etc.) There are some situations when that might be warranted like if you’re doing some policy-related thing. However, when running a student group and trying to get people who are really smart and good at thinking, it seems like the thing I’d want to do is just to state what I believe and why I believe it (even and especially if my reasons sound dumb) and then hearing where the other person agrees or disagrees with me. I don’t want to state arguments for EA or AI safety to new members again and again in different ways until they get on board with all of it, I want us to collaboratively figure things out.
I hope more people, especially EA community builders, take some time to reevaluate the value of growing the EA movement and EA community building. Seems like a lot of community builders are acting as if “making more EAs” is good for its own sake. I’m much less sure about the value of growing the EA community building and more uncertain about whether it is positive at all. Seems like a lot of people are having to put in energy to do PR, make EA look good, fight fires in the community when their time could be better spent directly focusing on how to solve the big problems.
But I also think directly focusing on how to solve the big problems is difficult and “get more people into EA and maybe some of them will know how to make progress” feels like an easy way out.
My intuition is that having more people does mean more potential fires could be started (since each person could start a fire), but it also means each fire is less damaging in expectation as it’s diluted over more people (so to speak). For instance, the environmentalist movement has at times engaged in ecoterrorism, which is (I think pretty clearly) much worse than anything anyone in EA has ever done, but the environmentalist movement as a whole has generally weathered those instances pretty well as most people (reasonably imho) recognize that ecoterrorists are a fringe within environmentalism. I think one major reason for this is that the environmentalist movement is quite large, and this acts as a bulwark against the entire movement being tarred by the actions of a few.
I guess I make comments like the one I made above because I think fewer people doing EA community building are seriously considering that the actual impact (and expected impact) of the EA movement could be net negative. It might not be, and I’m leaning towards it being positive but I think it is a serious possibility that EA movement causes more harm than good overall, for example via having sped up AI timelines due to DeepMind/OpenAI/Anthropic and a few of the EA community members committing one of the biggest frauds ever. Or more vague things like EAs fuck up cause prioritisation, maximise really hard, and can’t course correct later.
The reason why EA movement could end up being not net harmful is when we are ambitious but prioritise being correct and having good epistemics really hard. This is not the vibe I get when I talk to many community builders. A lot of them seem happy with “make more EAs is good” and forget that the mechanism for EA being positively impactful relies pretty heavily on our ability to steer correctly. I think they’ve decided too quickly that “EA movement good therefore I must protect and grow it”. I think EA ideas are really good, less sure about the movement.
If EA is net harmful then people shouldn’t work directly on solving problems either, we should just pack up and go home.
I like EA ideas, I think my sanely trying to solve the biggest problems is a good thing. I am less sure about the current EA movement, partly because of the track record of the movement so far and partly because of intuitions that movements that are as into gaining influence and recruiting more people will go off track and it doesn’t to me look like there’s enough being done to preserve people’s sanity and get them to think clearly in the face of the mind-warping effects of the movement.
I think it could both be true that we need a healthy EA (or longtermist) movement to make it through this century and that the current EA movement ends up causing more harm than good. Just to be clear, I currently think that in the current trajectory, the EA movement will end up being net good but I am not super confident in this.
Also, sorry my answer is mostly just coming from thinking about AI x-risk stuff rather than EA as a whole.
EA didn’t cause the FTX fraud.
Huh, not sure what you mean. Sure seems like the FTX fraud was committed by prominent EAs, in the name of EA principles, using the resources of the EA movement. In as much as EA has caused anything, I feel like it has caused the FTX fraud.
Like, by the same logic you could be like “EA didn’t cause millions of dollars to be allocated to malaria nets”. And like, yeah, there is something fair about that, in the sense that it was ultimately individual people or philanthropists who gave money to EA causes, but at the end of the day, if you get to take some credit for Dustin’s giving, you also have to take some blame for Sam’s fraud.
I really, really, realllllly disagree. Saying that EA caused FTX is more like saying EA caused Facebook than the contrapositive. You should have a pretty firm prior that someone who becomes a billionaire does it primarily because they enjoy the immense status and prestige that being “the world’s richest U30” bestows on a person; likewise someone committing fraud to keep that status.
My primary character assessment at this point is that he was an EA who was also one of those flavors of people who become quasi-sociopaths when they become rich and powerful. Nothing in Sam’s actual, concrete actions seem to indicate differently, and indeed he actually spent the grander part of that money on consumption goods like mansions for himself and his coconspirators. Maybe he really was in it for the good, at the beginning, but I just can’t believe that someone making a late decision to start a ponzi scheme “for the greater good” would act like he did.
(Also, using the resources of the EA movement how, exactly? Seems to me like his fraud would have been just as effective had he not identified as an EA. He received investment and consumer funds because of the firm’s growth rate and Alameda’s generous trades, respectively, not because people were interested in contributing to his charities.)
I don’t really understand the distinction here. If a core member of the EA community had founded Facebook, recruiting for its leadership primarily from members of EA, and was acting throughout as a pretty prominent member of the EA community, I would also say that “EA had a substantial responsibility in causing Facebook”. But actual Facebook was founded before EA was even a thing, so this seems totally non-comparable to me.
And while I don’t really buy your character-assessment, I don’t really see what this has to do with the blame analysis. If EA has some prominent members who are sociopaths, we should take responsibility for that in the same way as we would take credit for some prominent members who are saints.
Separately, this part seems confidently wrong:
I am quite confident Sam spent <$100MM on consumption, and the FTX Future Fund has given away on the order of $400MM in grants, so this statement is off by around a factor of 2, and more likely by a full order of magnitude, though that depends a bunch on how you count political contributions and other stuff that’s kind of ambiguously charity vs. helpful to Sam.
Do you have links/evidence here? I remember counting less than 250M when I looked at their old website, not even accounting for some of the promised grants that presumably never got paid out.
I remember this number came up in conversations at some point, so don’t have any source. Plausible the number is lower by a factor of 2 (I actually was planning to change that line to reflect my uncertainty better and edit to “on the order of $400MM grants” since $600MM wouldn’t have surprised me, and neither would have $200MM).
I likewise don’t understand what you’re finding weird about my position? If Eliezer Yudkowsky robbed a bank, that wouldn’t make LessWrong “responsible for a bank robbery”, even if Eliezer Yudkowsky were in the habit of donating a proportion of his money to AI alignment organizations. Looking at the AU-EY grabbing the money out of the brown paper bag and throwing it at strippers, you would conclude he mostly did it for his own reasons, just like you would say of a robber that happened to be a congressman.
If we could look into AU-EY’s mind and see that he thought was doing it “in the name of EA”, and indeed donated the robbed funds to charity, then, sure, I’d freely grant that EA is at least highly complicit—but my point is that I don’t believe that was SBF’s main motivation for founding FTX, and think absent EA he probably had a similar outset chance of running such frauds. You can say that SBF’s being a conditional sociopath is immaterial to his reducing “the group of people with the EA sticker’s point total”, but it’s relevant for answering the more productive question of whether EA made him more or less likely to commit massive fraud.
Well, I guess recruiting from EA leadership is one thing, but to what extent did FTX actually benefit from an EA-affiliated talent pool? I reviewed most of the executive team during my manifold betting and didn’t actually come across anybody who I could find had a history of EA affiliation besides SBF (though you may know more than me).
I actually didn’t know that. Is this counting the Anthropic investment or did FTXFF really give-away give-away that much money?
That… seems really confused to me. Caroline was part of EA Stanford, almost all the early Alameda staff was heavily-involved EAs (including past CEA CEO Tara MacAulay). I know less about Nishad but he was definitely very heavily motivated by an EA philosophy while he was working at FTX, had read a lot of the LessWrong content, etc.
There is also this explicit quote by Nishad from before the collapse:
It seems really quite beyond a doubt to me that FTX wouldn’t have really existed without the EA community existing. Even the early funding for Alameda was downstream of a bunch of EA funders.
I mean, if Eliezer robbed a bank, I think I would definitely think the rationality community is responsible for a bank robbery (not “LessWrong”, which is a website). That seems like the only consistent position by which the rationality community can be responsible for anything, including good things. If the rationality community is not responsible for Eliezer robbing a bank, then it definitely can’t be responsible for any substantial fraction of AI Alignment research either, which is usually more indirectly downstream of the core people in the community.
Yeah, I guess I’m just wrong then. I’m confused as to why I didn’t remember reading the bit about Caroline in particular—it’s literally on her wikipedia page that she was an EA at Stanford.
FWIW I still don’t understand this perspective, at all. It seems bizarre. The word “responsible” implies some sort of causal relationship between the ideology and the action; i.e., Eliezer + Exposure to/Existence of rationalist community --> Robbed bank. Obviously AI Alignment research is downstream of rationalism, because you can make an argument, at least, that some AI alignment research wouldn’t have happened if those researchers hadn’t been introduced to the field by LessWrong et. al. But just because Eliezer does something doesn’t mean rationalism is responsible for it any more than Calculus or the scientific method was “responsible” for Isaac Newton’s neuroticisms.
It sounds like the problem is that you’re using the term “Rationality Community” to mean “all of the humans who make up the rationality community”, while I’m using it to refer to the social network. I prefer my definition, because I’d rather discuss the social network and the ideology than the group of people: the people would exist regardless, and what we really want to talk about is whether or not the social network is +EV.
No, it implies a causal relationship between the community and the action. I don’t see any reason to constrain blame to “being caused by the ideology of the community”. If members of the community caused it, and the existence of the community had a pretty direct effect, then it sure seems like you should hold the community responsible.
In your last paragraph, you are also conflating “ideology” and “social network”. It seems really clear that the social network of EA played a huge role in FTX’s existence, so it seems like you would agree that the community should bear some responsibility, but then for some reason you additionally constrain things to the effect of some ill-specified ideology. Like, can a community of people with no shared ideology literally not be blamed for anything?
No, I agree with you now that, at the very least, EA is highly complicit in, if not genuinely entirely responsible for, causing FTX.
I don’t think we actually disagree on anything at this point. I’m just pointing out that if the community completely disbanded, LessWrong shut down, rationalists stopped talking to each other and trained themselves not to think in rationalist terms, and after all that AU-Yudkowsky still decided to rob a bank, then there’s a meaningful sense in which the Inevitable Robbery was never “the rationality community’s” fault, even though AU-Yudkowsky is a quintessential member. At least, it implies a different sort of calculus when considering the alternative world without the rationality community.
The Anthropic investment alone is $500MM; this is in addition to that.
Ok, that gives me some pause about his motivations… Probably enough to change my opinion entirely, but still.
Alas, I do think this defense no longer works, given FTX, which seems substantially worse than all the ecoterrorism I have heard about (and IMO also given the capabilities research that’s downstream of our work, like RLHF being the primary difference between ChatGPT and GPT-3, but that’s a longer argument, and I wouldn’t want to bring it up as a commonly-acknowledged point).
I disagree with this because I believe FTX’s harm was far less bad than most ecoterrorism, primarily because of the kind of disutility involved. FTX hasn’t actually injured or killed people, unlike a lot of ecoterrorism. It stole billions, which isn’t good, but no violence was involved. I don’t think FTX was good, but so far no violence has been attributed to, or even much advocated by, EAs.
Yeah, that doesn’t seem like a totally crazy position to take, but I don’t really buy it. I bet a lot of people would accept some probability of having violence inflicted on them in exchange for $8 billion, and I don’t think this kind of categorical comparison of different kinds of harm checks out. It’s hard to really imagine the scale of $8 billion, but I am confident that Sam’s actions have killed, indirectly via a long chain of actions but in a way for which he is nevertheless responsible, at least 20-30 people, which I think is probably more than any ecoterrorism that has been committed (though I am not that confident about the history of ecoterrorism, so maybe there was actually something that got to that order of magnitude?).
IMO ecoterrorism’s deaths were primarily from the Unabomber, which amounts to 3 deaths and 23 injuries. I may retract my first comment if I can’t find more evidence than this.
The Unabomber does feel kind of weird to blame on environmentalism. Or like, I would give environmentalism a lot less blame for the Unabomber than I would give us for FTX.
Some things I got wrong in the past:
In the past, I think I cared excessively about EA and myself seeming respectable, and I think I was wrong about the tradeoffs there. As one concrete example, when talking to people about AI safety, I avoided linking to blog posts even when I thought they were more useful to read, and instead sent people links to more legitimate-seeming academic papers and people, because I thought that made the field seem more credible. I think this and other similar things I did were bad.
In the past, I didn’t care enough about people’s character and how much integrity they had. I was very forgiving when I found out someone had intentionally broken a small promise to a colleague or acted manipulatively towards someone, because in those cases it seemed to me like the actual magnitude of the harm caused was small relative to the impact of the person’s work. I now think those small harms add up and could be quite costly, by adding mistrust and friction to interactions with others in the community.
I dismissed AI x-risk concerns in 2019 without making an honest attempt to learn about the arguments, because they sounded weird. I think that was a reasonable thing to do given the social environment I was in. The really big mistake I made there, though, was not telling my friends who had thought about it why I was unconvinced, because I was afraid of seeming dumb for not having read all the arguments already, and because I was afraid of feeling pressured if I did try to argue about it.
I thought university EA group organising was the most useful thing for me to do. This seemed sensible to believe for a while, but I stuck with it for too long because I had signed up to do it. If I had been honestly looking for evidence about the usefulness of the group organising activities I was doing, I would have realised much sooner that it was more useful to stop and do other things instead. I think this cost me ~100 hours.
There are different ways to approach telling people about effective altruism (or caring about the future of humanity or AI safety etc):
“We want to work on solving these important problems. If you care about similar things, let’s work together!”
“We have figured out what the correct things to do are, and now we are going to tell you what to do with your life.”
It seems like a lot of EA university group organisers are doing the second thing, and to me this feels weird and bad. A lot of our disagreement about specific things is really about them operating in that second frame: for example, I feel it is icky to use prepared speeches written by someone else to introduce people to EA, and bad to think of people who engage with your group in terms of where they are in some sort of pipeline.
I think the first framing is a lot healthier, both for communities and for the individuals doing activities under the category of “community building”. If you care deeply about something (e.g. using spreadsheets to decide where to donate, forming accurate beliefs, reducing the risk we all die due to AI, solving moral philosophy) and you tell people why you care and they’re not interested, you can just move along and try to find people who are interested in working with you on those problems. You don’t have to put them through some sort of pipeline where you start with the most appealing concepts in order to build up to the thing you actually want them to care about.
It is also healthier for your own thinking, because putting yourself in the mindset of trying to persuade others is, in my experience, pretty harmful. When I have been in that mode in the past, it crushed my ability to notice when I was confused.
I also have other intuitions for why the second approach just doesn’t work if you want to attract highly capable individuals who will actually solve the biggest problems, but in this comment I just wanted to point out the distinction between the two ways of doing things. I think they are distinct mindsets that lead to very different actions.
My defense of posting pseudonymously:
It does feel like I’m defecting a little bit by using a pseudonymous account. I do feel like I’m somewhat intentionally trying to inject my views while getting away with not paying the reputational cost of having them.
My comments use fewer caveats than they would if I were posting under my real name, and I’m more likely to blurt things I currently think without spending lots of time thinking about how correct I am. I also feel under little obligation to signal thoughtfulness and niceness when not writing under my name. Plausibly this contributes to lowering the quality of the EA forum, but I have found it helpful as a (possibly temporary?) measure to practise posting anything at all. I have experiences and opinions that I want others on the EA forum to know about, but I don’t want to do the complicated calculation to figure out whether it is worth posting them under my real name (where a significant part of that calculation is non-EA people coming across them while searching for me on the internet).
I also prefer the situation where people in the EA community can anonymously share their controversial views over the one where they don’t say anything at all, because it makes it easier to get an accurate pulse of the movement. I have mostly spent my time in social groups where saying things I think are true would have been bad for me, and I notice that this stunted my thinking a bit, since I avoided thinking thoughts that would be bad to share. Writing pseudonymously feels helpful for noticing that problem and practising thinking more freely.
Also idk pseudonyms are really fun to use, I like the semi-secret identity aspect of using them.
Yeah, pseudonyms are great. There have been recent debates about people using one-off burner accounts to make accusations, but those don’t reflect at all on the merits of using durable pseudonyms for general conversation.
The degree of reputation and accountability that durable pseudonyms provide might be less than using a wallet name, but it’s still substantial, and in practice it’s a perfectly sufficient foundation for good discourse.
As someone who is pretty far on the anti-pseudonym side of the debate, I think your point about caveats and time saved is a real concern.
Idk, this just occurred to me though… what about norms of starting a comment with:
epistemic status: blurted
or
epistemic status: halp I just felt someone needed to say this thing, y’all pls help me decide if true
And couldn’t that be fun too, maybe? If you let it be so?
Yeah, that does seem useful.
I still think I’ve found being pseudonymous more useful than writing under my name. It does feel like I’m less restricted in my thinking, because I know there are no direct negative or positive effects on me personally for sharing my thoughts. So, for example, I’ve surprisingly found it easier to express genuine appreciation for things or people. Perhaps I’m too obsessed with noticing how the shape of my thoughts changes depending on how I think they will be perceived, but it has been very interesting to notice that. It genuinely feels like there are more thoughts I am allowed to think when I’m trying on a pseudonym (I think this was much starker a few months ago, so maybe I’ve squeezed out most of the benefit by now).
Maybe folks funding community building at universities should make the option of “doing paid community building at the start of term for a couple of months and then focusing on personally skilling up and learning things for the rest of the year” more obvious. For a lot of group organisers, this might be a better idea than consistently spending 10 hours/week on group organising throughout the year.
Since EAG: SF is coming up and not all of us will get to talk to everyone it would be valuable to talk to, it would be cool for people to document takeaways from specific conversations they had (with those people’s consent), rather than just takeaways from the entire conference, which people have posted on the forum in the past.
This is an example of what I meant: https://www.lesswrong.com/posts/aan3jPEEwPhrcGZjj/nate-soares-life-advice
I wish there were more nudges to write posts like that after EAGs.
From https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future#Thoughts_on_the_balance_of_positive_and_negative_value_in_current_lives:
I share these intuitions, and they are a huge part of the reason reducing x-risk feels so emotionally compelling to me. It would be so sad for humanity to die out so young and unhappy, never having experienced the awesome possibilities that otherwise lie in our future.
Like, the difference between what life is like and the sorts of experiences we can have right now, and just how good life could be and the sorts of pleasures we could potentially experience in the future, is so incredibly massive.
I also feel like the only way to make up for all the suffering people experienced in the past and are experiencing now, and the suffering we inflict on animals, is to fill the universe with good stuff: create so much value and so many beautiful experiences and whatever else is good and positive and right, so that things like disease, slavery, and the torture of animals seem like a distant and tiny blot in human history.