Any views stated are my own only (not those of the organisation I work for)
Peter4444
[Question] What are the best EA materials for busy people?
Quick note to say that I appreciate short, readable updates like this, and I’m excited to hear about the progress of your org!
I haven’t yet read all of this (short on time), but I wanted to flag that what I have read seems deeply thoughtful. I’m disappointed that this post hasn’t had more engagement, and I for one am grateful you took the time and effort to write it.
This is very exciting!
Having worked with Jona at the EA Consulting Network, I’ve experienced his clarity of thought and communication, strong prioritisation and impact mindset, and keenness to coach me with specific feedback I could put into action.
Best of luck for 2023!
I have mixed feelings, because I understand what the post is getting at but think it’s a good example of someone writing down their thoughts without considering how others will perceive them. For example, there is no need to say ‘quality of person’ to get the point across. Using that phrase makes more sense if the author’s mental process is simply ‘write down, as accurately as possible, what I believe’, with no consideration of how the message might be received.
This problem seems common to me in the rationality community. Not meaning to dig at Thomas in particular, only to point it out, since I think it could reduce the diversity of the EA community along important lines.
If you state an opinion, the norm is that it should be scrupulously challenged.
If you state a feeling you had, especially a difficult or conflicted one, the norm is that it should be welcomed and certainly not challenged.
Individually, these attitudes make sense, but together I would expect them to skew Forum posts towards emotional reactions and away from careful, unemotional analysis.
To clarify, I want both, and I think emotional reactions can be very important. But at least once I’ve seen a detailed but unemotional post buried under a less well-thought-through post describing someone’s emotional reaction to a similar issue. Perhaps we should be welcoming of posts that try hard to do careful, rational analysis, even when they seem (or are) misguided or unsuccessful.
Socratic CODES to help others unlock the solutions to their problems
(Intersubjective evaluation—the combination of multiple people’s subjective evaluations—could plausibly be better than one person’s subjective evaluation, especially if of themselves, assuming ‘errors’ are somewhat uncorrelated.)
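The claim above — that averaging several people’s subjective evaluations beats one person’s, provided their errors are somewhat uncorrelated — can be illustrated with a minimal simulation (all numbers hypothetical; the ‘true value’ and noise level are assumptions for illustration):

```python
import random

random.seed(0)

TRUE_VALUE = 7.0  # the "true" quality being evaluated (hypothetical units)

def subjective_evaluation():
    # One person's evaluation: the true value plus independent (uncorrelated) noise.
    return TRUE_VALUE + random.gauss(0, 2.0)

def intersubjective_evaluation(n):
    # Combine n people's evaluations by averaging them.
    return sum(subjective_evaluation() for _ in range(n)) / n

# Compare the mean absolute error of a single evaluator against a group
# of five, averaged over many trials.
trials = 10_000
err_one = sum(abs(subjective_evaluation() - TRUE_VALUE) for _ in range(trials)) / trials
err_group = sum(abs(intersubjective_evaluation(5) - TRUE_VALUE) for _ in range(trials)) / trials

print(f"one evaluator:   mean abs error ~ {err_one:.2f}")
print(f"five evaluators: mean abs error ~ {err_group:.2f}")  # smaller when errors are uncorrelated
```

With uncorrelated noise, the group’s error shrinks roughly with the square root of the group size; if everyone shared the same bias (fully correlated errors), averaging would not help.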
Linking to Spencer Greenberg’s excellent short talk on intrinsic values:
Spencer claims, among other things, that
it’s a cognitive fact that you value multiple different things
if you pretend otherwise, e.g. because you feel it’s stigmatised to act based on any consideration but impartial impact, you will fool yourself with ‘irrational doublethink’ of the type described in this post.
Terms to clarify when discussing EA
Thanks for sharing, and congrats! I especially enjoyed reading through the timeline. (I generally like & find it helpful to read concrete, relevant info, especially in posts more abstract than this one.)
Thanks so much for sharing your thoughts in such detail here :)
Friends or relatives in Oregon? Please let us know! Updates & actions to help Carrick win
Not sure about best places, though I have a friend who’s working on setting up an EA community in Tulsa, Oklahoma.
It might be worth pointing out that, in my experience, EAs seem quite unusual in tending to talk about EA almost all the time, e.g. at parties and other events as well as at work. I’ve often found this inspiring and energising, but I can also understand how someone could feel overwhelmed by it.
Thanks. Yes, that was the survey I mentioned.
Compiling resources comparing AI misuse, misalignment, and incompetence risk and tractability
Great post!
The fact that some orgs already say things like ‘knowledge of effective altruism is preferred but not essential’ probably doesn’t solve this issue. I can imagine that many jobs are competitive enough that you could only reasonably have a shot if you ticked certain boxes related to EA knowledge/experience, even if a candidate without that obvious evidence might be better and more aligned.
I think there’s information value from doing lots of 10-minute speed-interviews, at least sometimes, so that we can get a sense of how many competent and EA-aligned people might be off EA orgs’ radar.
p.s. I can confirm that Evan has been an excellent volunteer for the EA & Consulting Network.
Thank you very much for this post. I thought it was well-written and that the topic may be important, especially when it comes to epistemics.
I want to echo the comments that cost-effectiveness should still be considered. I have noticed people (especially Bay Area longtermists) acting as though almost anything that saves time or is at all connected to longtermism is a good use of money. As a result, money gets wasted because cheaper ways of creating the same impact are missed. For example, one time an EA offered to pay $140 of EA money (I think) for two long Uber rides so that we could meet up, since there wasn’t a fast public transport link. The conversation turned out to be a 30-minute data-gathering task with set questions that worked fine when we did it on Zoom instead.

Something can have a very high value but a low price. I would pay a lot for potable liquid if I had to, but thanks to tap water that’s not required, so I would be foolish to do so. In the example above, even if the value of the data were $140, the price of getting it was lower. After accounting for the time spent finding cheaper alternatives, EAs should capture the surplus whenever possible.
As a default, I would like to see people doing a quick internal back-of-the-envelope calculation and scan for cheaper alternatives, which could take a minute or five. Not only do I think this is cost-effective; I think it helps with any issues of optics and alienation as well, because you only do crazy-expensive-looking things when there’s not an obvious cheaper alternative.
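The quick back-of-the-envelope habit described above can be sketched as a tiny script (all option names and dollar figures are hypothetical, based on the Uber-vs-Zoom example): list the ways of getting the same outcome, keep only those that cost no more than the outcome is worth, and pick the cheapest.

```python
# Hypothetical back-of-the-envelope comparison: pay the lowest price that
# delivers the outcome, rather than paying up to the outcome's full value.

value_of_data = 140  # what the conversation's output is worth, in $ (assumed)

# Candidate ways of getting the same data, with their costs in $ (assumed).
options = {
    "uber_both_ways": 140,  # two long Uber rides
    "zoom_call": 0,         # video call; worked fine in the example above
}

# Keep options worth doing at all, then take the cheapest.
adequate = {name: cost for name, cost in options.items() if cost <= value_of_data}
best = min(adequate, key=adequate.get)
surplus = value_of_data - adequate[best]

print(f"choose {best}; surplus captured: ${surplus}")
```

The point of the sketch is only the decision rule: the value of the data sets a ceiling on what you would pay, not the amount you should pay.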
It would also be nice to have a megathread of cheaper alternatives to common expenditures.
In my experience (interviewed 2020/21), Case In Point is no longer useful, except perhaps to skim through for some ideas.
My understanding is that, when consulting firms started using case interviews, they could get away with a handful of standardised formats (profitability diagnosis, M&A, etc.). Case In Point provides rigid frameworks to apply to those formats. But the firms have got wise to this: the frameworks were meant to test thinking, but they produced ‘framework monkeys’ applying canned frameworks, so firms now often give cases that don’t fit those frameworks and/or expect candidates to produce better, more specific insights.
I think the most helpful resources are those that focus on developing the skills being tested in case interviews. In my opinion, improving at the actual skills firms look for is a robust way to do well, is more fun, and is more useful for consulting and life.
I’d recommend the free course and articles from craftingcases.com, as well as casecoach.com. (BCG London gave all interviewees free CaseCoach access.)
Adding more that I’ve found or been told about:
- ‘How to (actually) change the world’ course (5-10 min videos): https://www.effectivealtruism.org/virtual-programs/how-to-actually-change-the-world
- Potentially some content from here: https://library.globalchallengesproject.org/
- AI Safety Fundamentals audio: https://preview.type3.audio/playlists/agi-safety-fundamentals-alignment
- More newsletters:
1. Global Development & Effective Altruism
2. Monthly Overload of Effective Altruism
3. Impactful Animal Advocacy Community Newsletter