EA Librarian Update
The EA librarian is a service run by the community health team at the Centre for Effective Altruism as part of their work on community epistemic health. You can submit questions related to EA and one of our librarians will research and respond to you with their thoughts. You can read more about the service in our original post here.
We have been running the service for about 8 weeks now and I wanted to give a brief update on our plans and highlight just a few questions that I was especially excited to see. If you submitted a question to our service, thank you so much! Almost all of our questions came through the private form.
This post consists of a few questions I was especially excited to see, in particular, questions that:
Ask for clarification of common EA terms.
Look for points of disagreement (cruxes) between common views in EA.
Ask for an unfamiliar space to be mapped out.
It also discusses types of questions that we didn’t see much of but would like to see more of, such as:
‘Foolish’ or embarrassing questions.
Questions that could highlight places where the user disagrees with some school of thought in EA (cruxes).
Some questions I thought were particularly great
Orthogonality?
“What does “orthogonality” and “orthogonal” mean when used by EAs? According to google it just means that two lines are perpendicular, but from the blurbs, I’ve read about the “orthogonality thesis” there is no mention of linear algebra, and instead mentions interdependence between intelligence and motivation …. ”
I think it is often easy to miss confusions of this type where some concept has the same name as a related but different concept commonly used in another field [1]. I also think when terms are commonly used within a subculture it can be difficult to notice that you don’t actually understand a term (and instead automatically infer the general sentiment from context). I used to find it a bit embarrassing to ask for clarification on commonly used terms.
Overall I think there are lots of places where motivated reasoning can kick in and make you feel like you understand things when you don’t. I can see why someone might not want to ask for clarification in this kind of case, and I am really happy to see questions of this type being asked.
Arguments against working on x-risks
“What are the strongest arguments against working on existential risk?”
I see cause prioritisation as one of the most valuable insights from EA, and I could imagine this type of question being on the mind of intro/in-depth fellows. I do think that there are probably already good posts on this topic, but I am happy for us to summarise this type of work or direct people towards good posts. I would love it if the EA librarian prompted more people to critique their current views by identifying the cruxes between their view and the opposing view. I think that just asking for strong arguments against your position is a step in the right direction.
Types of longtermism
“What are the different types of longtermisms?
I have heard of:
Strong / Weak
Broad / Targeted
Patient / Urgent
What do they mean?
How do they relate to each other? (is every combination possible?)
**Is there a clear and short resource laying out what they all mean?**
Are there other distinctions?”
Longtermism is a pretty new idea and is certainly not mainstream philosophy. Despite this, most active members of the EA community are at least familiar with a subset of the ideas, and many share the core intuition of valuing future generations similarly to present-day ones. As people are still doing foundational work in this space, new terms and ideas seem to be created quickly. The EA community’s unusually close proximity to this field means that discourse can easily assume that participants are familiar with ideas that were only invented recently.
I like how direct this question is: “here are some things that I don’t really understand, I also want to understand how they relate to each other”. I would be excited to see more of these questions that want to map out an unfamiliar space.
Questions that we didn’t see much of but want to see more of
Cruxfinding
As I said in the previous post, one of the ways people commonly report improving their judgement is through discussion with people who have good judgement. This is mostly anecdotal, but it at least seems to be a common anecdote. Unfortunately, I think that there are only a handful of organisations/communities where you are likely to get exposure to these kinds of discussions. The aspect of this service that I am most excited about is the one that emulates the important learning component of such discussions. I think there may need to be more iteration on the service to fully realise this, but I hope that we are moving in the right direction.
I think one way to elicit questions that might lead to these kinds of interactions is by looking for places where you disagree with other thoughtful people. You could ask a question like “I have perspective A but it seems like some other people have perspective B, could you summarise the main considerations that might lead to someone adopting view B instead of A?” Alternatively you may be able to find the thing that would make you change your mind (the ‘crux’) yourself and then ask a question in relation to that like “I have view A, I realised that I would adopt view B if C were true, what do you think about C?”
I’d be excited to see questions about cruxes that came out of a similar process, or the use of the EA librarian to find these kinds of cruxes. I also think these kinds of questions could provide more opportunities for librarians to illustrate with their answers how they reason.
‘Foolish’ or ‘embarrassing’ questions
I think that the perceived bar to submitting a question may still be too high. We received quite a few intricate questions related directly to people’s work. We are really happy to answer these questions where we can (or signpost resources that do a better job of this), but I think that our service is especially well suited to more fundamental/basic questions.[2] I worry that I created an impression that the librarians are to be used only for really important and technical questions, but I want to encourage a broader set of questions if that is helpful.[3]
If you feel confused by something and normally would be put off from asking your question directly, I hope I can nudge you to ask your question using this service. If you are not sure whether you should ask a question, you are welcome to email me at caleb.parikh@centreforeffectivealtruism.org.
Maybe the average Forum user doesn’t have these kinds of questions. I would find this surprising, as I have plenty of conversations with EAs in EA organisations where one of us asks for further clarification of some assumed point.
In fact, I think that people who are regularly exposing themselves to new ideas are likely to generate more of these basic questions. That said, I am experimenting with adapting the service to better engage with newer EAs, particularly those who don’t normally use the Forum, mostly by integrating the service with virtual and in-person programs.
Thanks to Nicole Ross, Julia Wise and Eve McCormick for proofreading many drafts of this post; all errors are my own.
- ^
Another example could be the use of utility in economics and philosophy (although I think that these are much more closely related than the different uses of orthogonality in the above question).
- ^
To pick an example that doesn’t seem related to EA from my personal life, I am a big fan of the musician Jacob Collier (JC) but my brothers really dislike listening to him. A recent discussion went something like this:
Me: “How can you not think JC is a great musician, he is a genius, his music is so unique and so much more interesting than most musicians that are popular. He’s objectively better than the people you listen to!”
Them: “I don’t think he is that good. He’s won quite a few awards but his music is never in the charts, surely if he was better then more people would listen to him?”
Me: “Hmm, so I don’t think that popularity is a good metric for the ‘goodness’ of a musician. I think that instead ‘goodness’ is some function of originality, influence on the trajectory of music people listen to …. and … yeah …. some other things.”
Them: “Yeah I can see why you might not think that popularity is tracking ‘goodness’ because it depends what we actually mean by ‘goodness’, let’s spend time chatting about what we think this ‘goodness’ thing actually means....”
I think this conversation was productive in thinking about why people listen to the types of music they listen to and what people mean when they say ‘good music’. We were able to move past the disagreement on whether JC is good or not and instead discuss the thing that would actually change our minds (what makes someone a ‘good’ musician).
- ^
Think less ‘Iron Man’ and more ‘your friendly neighbourhood [librarian]’, if that is helpful.
Hi Caleb, in the original EA Librarian post you wrote that people could submit questions to the service on the Forum using the EA Librarian tag. I asked one of those here and got a couple of responses, but it wasn’t clear to me if those were routed through the service or if they were just forum users who happened to see it. Suggestion: maybe when librarians answer a question on the forum they could identify themselves as such?
I’ve added your question to our system, but our turnaround time is a bit slow at the moment.
I’ll make sure that answers include “(EA Librarian)” at the start so you know it has been answered by the librarians.
Are questions asked via the tag going to be added to your system going forward or should people stop using that route?
People can still use the forum to submit questions.
To be clear, I didn’t mean to suggest above that I added the question to our system in reaction to Ian’s message. I had already added the question (and will continue to add any questions submitted on the forum); we are just running a bit slow at the moment.