Hi! I’m an undergraduate at University College Dublin studying computational social science. Contact me for any reason, but please do so through Twitter, email, or LinkedIn instead of forum DMs :)
Ines
I agree with you—I generally come to the forum looking for more thoughtful content, and there are already several EA Facebook groups for which at least the meme post would have been more appropriate. I think the writing contest is probably fine though.
This may be useful for Future Perfect as a case study: The 12 most-read Future Perfect pieces of 2021
I think this is a great idea! I worry that calling it The Altruist might be off-putting to some readers, as it could come across as self-congratulatory.
I see this as one of those problems that could be addressed with a “trickle-down solution”: Once the top universities and/or academic journals change their policies, it is likely that all the rest will copy them and follow suit. I don’t know if there is any type of “lobbying” we can do to influence these institutions but it seems like a potentially straightforward and tractable path.
Are they familiar with Charity Entrepreneurship? They research high-impact nonprofit ideas (which you can find on their website), and they have an incubation program.
Team smarter than you—join a team where most people are smarter than you
Couldn’t you argue that your marginal impact is smaller here than in a case where you’re the smartest person on the team?
Gotcha
I think many of these benefits could be achieved by local EA groups working on a high-impact project together (maybe like those in Impact CoLabs?). Some people in my local EA group have started working on AI research together and that seems to be going pretty well. I worry EA groups doing community service in an official EA capacity may muddy the waters about what effective altruism stands for.
No, that’s not what I mean. I mean we should use other examples of the form “you ask an AI to do X, and the AI accomplishes X by doing Y, but Y is bad and not what you intended” where Y is not as bad as an extinction event.
Hm, yeah, I see where you’re coming from. Changed the phrasing.
This is a good point, and I thought about it when writing the post: trying to be persuasive does carry the risk of mischaracterizing things in a flattering way or worsening epistemics, and we must be careful to avoid that. But I don’t think it is bound to happen with every attempt at being persuasive, such that we shouldn’t even try! I’m sure someone smarter than me could come up with better examples than the ones I presented. (For instance, the example about using visualizations seems pretty harmless; maybe attempts to be persuasive should look more like that one than like the rest of the examples?)
Oh, I like this idea! And love WaitButWhy.
Yes, this is true and very important. We should by no means lose sight of existential risks as a discerning principle! I think the best framing to use will vary a lot case-by-case, and often the one you outline will be the better option. Thanks for the feedback!
Is there a newsletter or somewhere to subscribe for updates?
I think a common bottleneck here is that explicitly setting out to make the members of your EA group become friends can feel inorganic and artificial. The activities you suggest seem like a good way of doing this without it feeling forced, and I’ll probably be using some of these ideas for EA Ireland. Thanks for writing up this wholesome post!
The closest thing I can think of is the 80,000 Hours LinkedIn group.
Ability to include a poll when you make a question post, à la Twitter! I know this feature has been suggested before, in response to which Aaron Gertler made the Effective Altruism Polls Facebook group, but it seems to have plateaued at 578 members after 2.5 years. Response rates on the forum would probably be much higher.
I’ve added you to a list of relevant people :)
Thanks!
This seems very useful. Personally, I would also be interested in:
Rate of improvement: What level of skill or advancement would be considered poor, mediocre, or exceptional after X months/hours? This would be especially valuable for careers involving soft skills, where it is often hard to know how you should be measuring your performance or what a good rate of improvement looks like. (For AI research, it could be something like, “after X hours of learning this concept, it would be considered poor/fine/great to score in the Yth percentile of this machine learning competition”.)
Related careers: If you mostly enjoy a career except for one or two specific components, what are other similar careers that may be a good fit?