Hi! I’m an undergraduate at University College Dublin studying computational social science. Contact me for any reason, but please do so through Twitter, email, or LinkedIn instead of forum DMs :)
Ines
EA can sound less weird, if we want it to
Announcing the EA Merch Store!
AGI Safety Communications Initiative
Having a standard word-of-mouth goal
Donation Matching Opportunities
[Question] What would you like to see in an EA merch store?
[Cause Exploration Prizes] Expanding communication about AGI risks
We are not charging a markup because it could lead to tax-related complications, but this may change in the future.
Is there a newsletter or somewhere to subscribe for updates?
Ability to include a poll when you make a question post, à la Twitter! I know this feature has been suggested before, in response to which Aaron Gertler made the Effective Altruism Polls Facebook group, but it seems to have plateaued at 578 members after 2.5 years. Response rates in the forum would probably be much higher.
No, that’s not what I mean. I mean we should use other examples of the form “you ask an AI to do X, and the AI accomplishes X by doing Y, but Y is bad and not what you intended” where Y is not as bad as an extinction event.
This is a good point, and I thought about it when writing the post—trying to be persuasive does carry the risk of flatteringly mischaracterizing things or worsening epistemics, and we must be careful not to do this. But I don’t think it is doomed to happen with any attempt at being persuasive, such that we shouldn’t even try! I’m sure someone smarter than me could come up with better examples than the ones I presented. (For instance, the example about using visualizations seems pretty harmless—maybe attempts to be persuasive should look more like this than the rest of the examples?)
I think a bottleneck to this is often that having the explicit goal of trying to make the members of your EA group become friends can feel inorganic and artificial. The activities you suggest seem like a good way of doing this in a way that doesn’t feel forced, and I’ll probably be using some of these ideas for EA Ireland. Thanks for writing this wholesome post up!
This may be useful for Future Perfect as a case study: The 12 most-read Future Perfect pieces of 2021
It’s an MVP—we will upgrade to a better website in due time. Hopefully the release of more products will mean there will be more options to suit a greater variety of tastes. If you have any ideas for designs or aesthetic styles that would appeal to you, I encourage you to submit them.
Unfortunately, the products cannot get any cheaper than they are, as we cannot operate the store without using a service like Printful, and prices may in fact rise in the future if we change the funding model.
Try again now!
I’ve added you to a list of relevant people :)
This seems very useful. Personally, I would also be interested in:
Rate of improvement: What level of skill or advancement would be considered poor, mediocre, or exceptional after X months/hours? This would be especially valuable for careers involving soft skills, in which it is often hard to know how you should be measuring your performance or what a good rate of improvement looks like. (For AI research, it could be something like, “after X hours of learning this concept, it would be considered poor/fine/great to score in the Yth percentile of this machine learning competition”.)
Related careers: If you mostly enjoy a career except for one or two specific components, what are other similar careers that may be a good fit?
Hm, this may be right. We will change it if this comment gets enough upvotes. Also, if you had the same issue as Dan (shipping was too expensive), try again now!
I think this is a great idea! I worry that calling it The Altruist might be off-putting for some readers, as it could be read as self-congratulatory.