“Small World”: website that shows you which city your friends are in

Yeah, it’s not perfect… I’d like to be able to silently block people too, in case I no longer want to hang out with them. But hey, it’s open source, so maybe we can improve it.
The high success rate almost makes me think CE should be incubating even more ambitious, riskier projects, with the expectation of a lower success rate but higher overall expected value. I’m very uncertain about this intuition, though, and would be interested to hear what CE thinks.
RomanHauksson’s Shortform

It would be great to have data on the gap between the professional skills EAs are training up in and the skills EA organizations find most useful and neglected. I’ve heard there’s a shortage of information security expertise within the AI safety field, but I’d like to see data backing this up before I commit to self-studying cybersecurity. Maybe someone could survey EA organization managers about which skills they’re looking for and which roles they’ve had a hard time filling, alongside a survey of early-career EAs about which skills they have and which they’re learning. Running the survey regularly would also let us observe trends.
I would like to emphasize that when we discuss community norms in EA, we should remember that the ultimate goal of this community is to improve the world and humanity’s future as much as possible, not to make our own lives as enjoyable as possible. Increasing the wellbeing of EAs is instrumentally useful because it raises productivity and attracts more people willing to make sacrifices like “donate tens of thousands of dollars” or “change your career plan to work on this problem”, but ultimately the point isn’t to create a jolly in-group of ambitious nerds. For example, if the meshing of polyamorous and professional relationships causes less qualified candidates to earn positions in EA organizations, this may be net negative, even if the polyamorous relationships make people really happy.
I made a similar deck a few months ago, and there might be some overlap: https://github.com/RomanHN/CFAR_jargon
Hi Isaac! We’re in a similar situation: I’m 19, studying Computer Science at a mid-tier university, with a strong interest in AI alignment (and EA in general). Have you gone through the 80,000 Hours career guide yet? If not, it should give you some clarity. It recommends that we just focus on exploration and gaining career capital right now, rather than choosing one problem area or career path and going the whole hog.
Congratulations on the launch! This is huge. I have to ask, though: why is the ebook version not free? I would assume that if you wanted to promote longtermism to a broad audience, you would make the book as accessible as possible. Maybe charging for a copy actually increases the number of people who end up reading it? For example, it would rank higher on bestseller lists, attracting more eyes. Or perhaps the reason is simply to raise funds for EA?
Does anyone have advice on how I can use language models to write nonfiction better? I mean both improving a specific piece of text and learning to write better in the long term. Maybe a tool like Grammarly but more advanced: one that critiques the writing I have so far, asks questions, suggests wordings, points out which sentences are especially well-written, et cetera.
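To make the question concrete, here’s a minimal sketch of the kind of tool I have in mind, assuming the OpenAI Python client; the model name and the coaching prompt are placeholders I made up, and any chat-capable LLM API would work just as well:

```python
# Minimal sketch: ask an LLM for structured feedback on a nonfiction draft.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# the model name and system prompt below are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

def critique(draft: str) -> str:
    """Return the model's critique of a piece of nonfiction writing."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a writing coach. Critique the draft below: point out "
                    "weak or awkward sentences, suggest rewordings, note which "
                    "sentences are especially well-written, and ask clarifying "
                    "questions about anything ambiguous."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(critique("Effective altruism is a movement that tries to do good better."))
```

For the long-term learning part, the same loop could log each critique alongside the draft it came from, so recurring weaknesses become visible over time.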