Thanks for your list and please do!
Siao Si
Thank you!
I’ve sometimes thought about whether ‘immortality’ is the right framing, at least for the current moment. Like AllAmericanBreakfast points out, I think that anti-ageing research is unlikely to produce life extensions in the 100x to 1000x range all at once.
In any case, even if we manage to halt ageing entirely, ceteris paribus there would still be deaths from accidents and other causes. A while ago I tried a Fermi estimate of this; I think I used this data (United States, 2017). The death rate for people aged 15–34 is ~0.1%/year; a constant death rate at that level would put the median lifespan at ~700 years (modelling lifespan as X ~ Exp(0.001), which has median ln(2)/0.001 ≈ 693 years).
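The arithmetic behind that estimate can be checked in a few lines (a sketch under the same simplifying assumption: a constant 0.1%/year death rate and an exponential lifespan model):

```python
import math

# Assumed constant annual death rate of 0.1%/year, as for US 15-34-year-olds
# in the 2017 data cited above.
rate = 0.001

# If lifespan X ~ Exp(rate), the median is ln(2) / rate.
median_lifespan = math.log(2) / rate
print(round(median_lifespan))  # ~693 years, i.e. roughly 700
```

The mean of the same distribution is 1/rate = 1000 years, so "median ~700" is the more conservative summary of the model.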
This is probably an underestimate of lifespan: accidental deaths should decrease (through safety improvements, of which self-driving cars might be significant given how many people die in road accidents), curing ageing might have positive effects on younger people as well, other healthcare improvements should occur, and people might be more careful if they’re theoretically immortal(?). However, I think this framing poses a slightly different question:
Do we prefer that more people:
1. Live shorter lives and die of heart disease/cancer/respiratory disease*, or
2. Live (possibly much) longer lives and die of accidents/suicide/homicide?
I don’t know how I feel about these. I think in the theoretical case of going immediately from current state to immortality I’d be worried about Chesterton’s-fence-y bad results—not that someone put ageing into place, but I’d expect surprising and possibly unpleasant side effects of changing something so significant**.
*I inferred from the data I linked above that heart disease and cancer are somewhat ageing-related, I’m not sure if this is true
**The existence of the immortal jellyfish Turritopsis dohrnii implies that a form of immortality was evolvable, which in turn might imply that there’s some reason evolution didn’t favour more immortal things, or things that tended slightly more towards immortality.
(Disclaimer: I talked to Sasha before he put up this post.) As a ‘random EA person’, I did find reading this clarifying.
It’s not that I believed that “the orthogonality thesis is the reason why AGI safety is an important cause area”, but that I had never thought about the distinction between “no known law relating intelligence and motivations” and “near-zero statistical correlation between intelligence and motivations”.
If I’d otherwise been prompted to think about it, I’d probably have arrived at the former, but I think the latter was rattling around inside my system 1 because the term “orthogonality” brings to mind orthogonal vectors.
Thanks for writing this up!
What are the use cases you envision for terms like these?
I appreciate the concern that people might feel deceived when finding out that the movement doesn’t look quite like what they were expecting, but I think this might be better addressed by pointing out to new people that EA is a broad group with a variety of interests, values, and attitudes.
I’m concerned that splitting up EA according to aesthetics/subcultures might be harmful, and I think it should be handled with care. The human tendency to look for identity labels and subgroups to belong to is very strong, and subgroup identification can create insularity and group polarization, which are probably things we should avoid. It could also result in people altering beliefs in order to fit an identity framing as Lizka describes in the case of longtermism here.
Any large coalition will have variation across the group, and terms that describe subgroups can be helpful. However, while describing EA in terms of cause areas, or even terms like ‘longtermist’, gives me a strong idea of what a person or group might be interested in and what might be valuable to them, I’m not sure what information the aesthetic categories give me as descriptors.
There’s also a lot of complexity in the connections between groups and ideas in EA, and I think this is an aspect of EA which should be encouraged and emphasized, not flattened into categories.
I wanted to describe my personal experience in case it shifts anyone like me towards applying. I was accepted, received travel support, and went to EAG London last month.
Initially, I thought the likelihood that I would be accepted and able to go was very low: I didn’t think I was involved enough in EA, and I didn’t think it made sense for me to receive travel support as I live very far from London. I also didn’t think that I ‘deserved’ to go: I reasoned that I shouldn’t take a spot from someone who was more engaged in EA or who could provide more value to other attendees. I probably wouldn’t have applied if not for having a personal connection with someone else who applied.
Nearly every interaction I had at the conference was positive. Many people I spoke to were happy to share about their area even if I had little prior understanding, and I was surprised to find I had ideas and perspectives that were unique/might not have surfaced in conversation had I not been there.
As a young person, I have never felt more respected as a full person and equal with meaningful ideas to contribute. EAG is intense—it can be near constant interaction with a lot of people, focused on the most important problems in the world. But going to EAG made me feel like a ‘part of’ EA, and gave me a lot more confidence to make decisions, to try things, to reach out to people.
If you’re like me and concerned about not being qualified or not having done enough, let the organisers judge, and consider the possibility that EAG might give you the ability to do more later.
Thanks for writing this up!
I’m not sure about the implications, but I just want to register that deciding after each roll whether to roll again, up to a total of n rolls, is not the same as committing to n rolls at the outset. The latter is equivalent in expected value to rolling all n trials at once; the former has a much higher expected value (though the latter is still positive).
A suggestion that might preserve the value of giving higher karma users more voting power, while addressing some of the concerns: give users with karma the option to +1 a post instead of +2/+3, if they wish.
Thanks for this! It wouldn’t have occurred to me to consider the decline of footbinding as a case study of moral progress.
I think you’ve probably noted this, and perhaps didn’t mention it because it’s not directly relevant to the main questions you’re investigating, but it’s important for someone who only reads this post to know that bound feet were a status symbol: the practice began among the social elite and spread over time to lower social classes; it remained a status symbol because families who needed girls to do agricultural labour could not partake in it; and in practice one incentive for it was improving marriage prospects.
Thanks for writing this post! I think promoting diversity in EA is incredibly important and I appreciate your contribution to it.
However, I get a feeling here that you’ve started with an underlying assumption that “EA should cater to women”, which I don’t see the argument for. Certainly, if there’s a stark lack of women throughout EA, I’d feel that there’s a problem that needs to be specifically addressed—but I don’t think this is the case.
You present information about the academic fields that correlate with participation in EA, and note that there’s a gender disparity that matches the one we seem to observe in EA. To me, this seems like evidence that there isn’t a problem within EA, but a result of broader, more complicated dynamics elsewhere. On the other hand, the other demographic data you present on other minorities seems like a more significant issue.
For instance, the rest of the 80k article you cite is clear about the fact that the framework isn’t applicable to everyone, and I think the choice of whether to have children is just one of many possible reasons the framework might not strictly apply to a person. And while demographic data shows that the work of having children affects women disproportionately, non-women who intend to parent would also need to consider the same unanswered questions.
Because of these dynamics, I don’t think that the claim you’ve made here, that more resources should be directed in a way that addresses the gender disparity, is substantiated.
I’m curious to hear from where you’ve gotten the sense that the EA community uses a ‘male-default position’ - I’ve never felt this.
(As a side note, if you haven’t heard of them, there’s Magnify Mentoring, previously WANBAM.)
Perhaps you’ve seen these already if you’re thinking about having kids, but Julia Wise and Jeff Kaufman have written extensively about their decision to become parents and their experiences parenting. The posts I could find that address the decision itself:
I think it would be better if agree/disagree voting didn’t follow the typical karma rules where different users have different amounts of karma. As it stands I often don’t know how many people expressed agreement vs. disagreement, which feels like the information I actually want, and it doesn’t make intuitive sense that one forum user might be able to “agree twice as much” as another with a comment.
It seems like setting ourselves up for selection bias if we listen only to people with experience of “how bad journalism gets”. We also want advice from people who have had good experiences with journalism, because they may be doing things that make good experiences more likely, and presumably know how to continue having them.
There may be some parts of EA where the media don’t start out well inclined to the area at hand, but on many of the topics we might want to engage with the media on, they would likely start out neutral or positive.
We might take the points here with more weight if they are from someone with extensive experience, but a lack of experience doesn’t invalidate the reasoning here.
Consider using the EA Gather Town for your online meetings and events
Should be fixed now, thanks for highlighting.
There’s EA VR—they’re listed as inactive but I think there’s some activity in their discord. Look forward to seeing you around and feel free to ping anyone with ‘EAGather Steward’ in their name for a tour :)
All AGI Safety questions welcome (especially basic ones) [July 2023]
I just looked at the application for the role of content specialist for CEA, which seems to involve a lot of work on this forum.
I noticed that if one indicates they have been personally referred by someone ‘involved in effective altruism’, one is given the option to skip ‘the rest of the application’ - which seems like the majority of the substantive information one is asked to give.
This seems overtly nepotistic, and I can’t think of a good reason for it—can anyone give one?
Oh I see, thanks! - I didn’t realise this because the statement that appears after indicating you’ve been personally referred is: “Since you were referred to this position, the rest of the application is optional” which makes it sound like it wouldn’t be optional if you weren’t referred.
I don’t think ignoring animal feed makes sense here. I can’t find the source at the moment, but the vast majority of Peruvian anchoveta is reduced to fishmeal and exported to countries like China to serve as feed for land animals and even larger farmed fish species; the incentive structure is such that factories that are supposed to produce anchoveta derivatives for direct human consumption illegally produce fishmeal instead.
I think increased consumption of fish sauce in place of other animal products would be a move down the food chain and result in a net decrease in animal suffering, not to mention being advantageous for fishing-reliant economies there.