Co-founder and CEO of Probably Good.
Also a co-founder of EA Israel.
Thank you!
This viewpoint is really helpful. It seems relatively easy to look at a specific article and figure out who it might be useful for, but creating a generic way to organize articles that would work for most people is quite a bit harder.
And I agree that concreteness is definitely something we should be explicitly thinking about when creating content and organizing it.
And I agree regarding both downsides \ risks. They’re definitely something to think about. The first might mean that this is something that comes later if we don’t find a relatively simple way of doing it.
The second can probably be mitigated to a large extent with some effort, but requires more thinking in any case. We’ve discussed this in related contexts (making sure we don’t counterfactually cause readers not to engage with other existing quality content), but not in this context.
This is something we discussed at length and are still thinking about.
As you write in the end, the usual “I’ll experiment and see” is true, but we have some more specific thoughts as well:
I think there’s a meaningful difference between someone who uses “shoddy” methodology and someone who’s thoughtfully trying to figure out the best course of action and has either not gotten there yet or hasn’t overcome some bad priors or biases. While I’m sure there are some edge cases, I think most cases aren’t on the edge.
I think most of our decisions are easier in practice than in theory. The content we’ll write will be (to the best of our ability) good content that will showcase how we (and the EA community) believe these issues should be considered. 1:1s or workshops will prioritize people we believe could benefit and could have a meaningful impact, and since we don’t expect to be able to meet demand any time soon, I doubt we’ll have to consider cases that seem detrimental to the community. Finally, our writing, while aiming to be accessible and welcoming to people with a wide variety of views, will describe similar thought processes and discuss similar scopes to the broader EA community (albeit not 80k’s). As a result, I think it will be comparable to other gateways to the community that exist today.
The above point makes the practical considerations of the near future simpler. It doesn’t mean that we don’t have a lot to think about, talk through, and figure out regarding what we mean by ‘Agnostic EA’. That’s something that we haven’t stopped discussing since the idea for this came up, and I don’t think we’ll stop any time soon.
This is the risk we were most worried about regarding the name. It does set a relatively light tone. We decided to go with it anyway for two reasons:
The first is that the people we talked to said it sounds interesting, and their responses were more positive than the ones we got for more conventional, descriptive names.
The second is that our general tone in writing is more serious. Serious enough that we’re working hard to make sure that it isn’t boring for some people who don’t like reading huge walls of dense text. We figure it’s best to err on the other side in this case.
I think we agree on more than we disagree :-)
I was thinking of two main things when I said there aren’t many ways to reduce people’s expectation of certainty.
The first, as you mentioned, is 80k’s experience that this is something where claiming it (clearly and repeatedly) didn’t have the desired outcome.
The second is from my own experience, both in giving career advice and in other areas where I did consultation-type work. My impression was (and again, this is far from strong evidence) that (1) this is hard to do and (2) it gets harder if you don’t do it immediately at the beginning. So for example, when I do 1:1s, setting expectations is something I go into in the first few minutes. When I didn’t, it was very hard to correct after 30 minutes. This is one of the reasons that I think having this prominent (it doesn’t have to be the name, it could be in the tagline \ etc.) could be helpful.
Your later points seem to indicate something which I also agree with: That naming isn’t super important. I think there are specific pitfalls that can be seriously harmful, but besides that—I don’t expect the org name to have a very large effect by itself one way or another.
For the sake of clarity I’ll restate what I think you meant:
We’re not discussing the risk that, because we exist, people take less impactful career paths than they would have taken counterfactually (when otherwise they might have only known 80k, for example). That is a risk we discuss in the document.
We’re talking specifically about “membership” in the EA community. That people who are less committed \ value aligned \ thoughtful in the way that EAs tend to be \ something else—would now join the community and dilute or erode the things we think are special (and really good) about our community.
Assuming this is what you meant, I’ll write my general thoughts on it:
1. The extent to which this is a risk is very dependent on the strength of the two appearances of “very” in your sentence “A career org that (1) was very broad in its focus, and/or very accepting of different views”. While we’re still working out what the borders of our acceptance are (as I think Sella commented in response to your question on agnosticism), we’re not broadening our values or expectations to areas that are well outside the EA community. I don’t currently see a situation where we give advice or a recommendation that isn’t in line with the community in general. It’s worth noting that the scope and level of generality that the EA community engages with in most other interactions (EA Global, charity evaluation orgs, incubation programs, etc.) is much broader than 80k’s current focus. We see our work as matching that broader scope rather than expanding it, and so we don’t believe we’re changing where EA stands on this spectrum—simply applying it to the career space as well.
2. More importantly, even in cases where we could make a recommendation that (for example) 80k wouldn’t stand behind—our methodology, values, rigor in analysis, etc. should definitely be in line with what currently exists, and is expected, in the community. I can’t promise we won’t reach different conclusions sometimes, but I won’t be “accepting” of people who reach those conclusions in shoddy ways.
3. This is a relatively general point, but it’s important and it mitigates a lot of our risks: In the next few months, we’re not planning to grow, do extensive outreach or marketing, or try to bring a lot of new people in. That’s explicitly because we want to create content and start working, do our best to evaluate the risks (with the help of the community) - and only start having a large impact once we’re more confident in the strength and direction of that impact.
In a sense (unless we fail pretty badly at evaluating in a few months) - we’re risking the very small harm a small, unknown org can do, and potentially gaining benefits that could be quite large if we do find that our impact looks good.
Thanks!
The intended meaning was that EA materials directed at this need specifically don’t exist. But I think you’re correct and that this wasn’t clear. I also like your version better, so will be updating the doc accordingly. Thank you!
Thank you! Both for the thoughts and for the separation into different comments. It is much easier to keep track of everything and is appreciated :-)
The guide we’re working on is indeed similar in some aspects to 80k’s old guide.
We’re still working on it (and are at relatively early stages) so none of this is very certain but I expect that:
* The guide will differ in our framework for thinking about it (so things like the thought process and steps you go through to make a decision).
* I expect the guide will differ on some specific areas where we are more agnostic than 80k, but won’t differ on most.
* Specifically, 80k have updated their 2017 guide to focus on longtermism more than it originally did. That would be a specific area where we will differ.
* I’m quite sure we pretty much agree on a lot of the meta-considerations (things like “it’s a very important decision and is worth a lot of consideration” or “for most people, making an effort to expand the scope of the search is worthwhile”).
Regarding why we’re doing this: Even if this was 80k’s current guide, I’d think a second general viewpoint on how to approach career decisions would be valuable to a lot of people. Given that 80k consider this their “older” guide, I really think it would be helpful to have another one.
Also (and probably more importantly), a general guide is just a really useful way to put the most important general information that we think is necessary in one place. There are a lot of things we think are important that fit very well into this format.
That’s really interesting! There are probably quite a few different formats to do this sort of thing (one on ones with people facing the same dilemmas \ people that have faced it recently, bringing together groups of people who have similar situations, etc.)
I think some local groups are doing things like this, but it’s definitely something we should think about as an option that can potentially be relatively low effort and (hopefully) high impact.
First of all, thank you for the feedback! It’s not always easy to solicit quality (and very thoroughly justified) feedback, so I really appreciate it.
Before diving into the specifics, I’ll say that on the one hand—the name could definitely change if we keep getting feedback that it’s suboptimal. That could be in a week or in a year or two, so the name isn’t final in that sense.
On the other hand, we did run this name by quite a few people (including some who aren’t familiar with EA). We tried (to the best of our ability) to receive honest feedback (like not telling people that this is something we’re setting up, or letting someone else solicit the feedback). Most of what you wrote came up, but rarely. And people seemed to feel positively about it. It’s definitely possible that the feedback we got on it was still skewed positive, but it was much better for this name than for other options we tried.
Now, to dive into the specifics and my thoughts on them:
* The name doesn’t make the function clear: I think this is a stylistic preference. I prefer having a name that’s more memorable, when the function can be explained in a sentence or two right after it. I know the current norm for EA is to name orgs by stating their function in 2 or 3 words, but I think the vast majority of orgs (for profit and non-profit) choose a name that doesn’t just plainly state what the org does. I will mention that, depending on context, what might appear is “Probably Good Career Advice”, which is clearer (though still doesn’t fully optimize for clarity).
* Good can mean quality and morality: Again, I liked that. We do mean it in both ways (the advice is both attempting to be as high quality as possible and as high as possible in moral impact, but we are working under uncertainty in both parameters).
* Turning people off by giving the message that the product isn’t good or that we’re not ambitious in making it good: I pretty much fully agree with you on the analysis. I think this name reduces the risk of people expecting a level of certainty that we’ll never reach (and is very commonly marketed in non-EA career advice) and increases the risk of people initially being turned off by perceived low quality or low effort.
I also like and agree with your “pitch” and that is more or less how I’m thinking about the issue.
Two relevant points on weighing this trade-off:
1. Currently, I’m more worried about setting expectations too high than about the perception of low quality. Both because I think we can potentially cause more harm (people following advice with less thought than needed) and because I think there are other ways to signal high quality and very few ways to (effectively) lower people’s perceived certainty in our advice.
2. Most people we ran the name by did catch on that the name was a little tongue-in-cheek in its phrasing. This wasn’t everyone, but the people who did see it didn’t think there was a signal of lower quality.
I do agree there’s a risk there, but I see it as relatively small, especially if I’m assuming that most people will reach us through channels where they have presumably heard something about us and aren’t only aware of the name.
To summarize my thoughts:
I don’t think it’s a perfect name.
I like that it’s a memorable phrase rather than a bland statement of what we do. I like that it’s a little tongue-in-cheek and that it does a few things at the same time (the two meanings of good, alluding to the uncertainty). I like that it puts our uncertainty front and center.
I agree there’s a risk of signaling low quality \ effort and that all of the things that I like could also be a net harm if I’m wrong (which isn’t particularly unlikely).
We’ll collect more feedback on the name and we’ll change it if it doesn’t look good.
Thank you for writing this!
I think your analysis can be particularly useful for people who want to contribute and feel like they’re not sure where to look for neglected areas in EA.
I’ll add a small comment regarding “It is difficult to compete with the existing organisations that are just not quite doing this”:
My experience with orgs in the EA community is that pretty much everyone is incredibly cooperative and genuinely happy to see others fill in the gaps that they’re leaving.
I’ve been in talks with 80,000 hours and a few other orgs about an initiative in the careers space for a while now. Everyone we’ve talked to was both open about what they’re doing (and what they aren’t doing) and ridiculously helpful with advice and support.
I think that if someone is serious about trying to fill a gap in the EA body of work, it’s important to understand from adjacent orgs how big \ real this gap is and whether they have comments about your approach to it. And while I can see why someone would be worried, I think if you approach with the right attitude, the ‘competition’ would have far more benefits than harms.
Thanks for writing this! It’s always useful to get reminders of the sort of mistakes we can fail to notice even when they’re significant.
I also think it would be a lot more helpful to walk through how this mistake could happen in some real scenarios in the context of EA (even though these scenarios would naturally be less clear-cut and more complex).
Lastly, it might be worth noting the many other tools we have to represent random variables. Some options off the top of my head (there’s a small sketch after the list):
* Expectation & variance: Sometimes useful for normal distributions and other intuitive distributions (eg QALY per $ for many interventions at scale).
* Confidence intervals: Useful for many cases where the result is likely to be in a specific range (eg effect size for a specific treatment).
* Probabilities for specific outcomes or events: Sometimes useful for distributions with important anomalies (eg impact of a new organization), or when looking for specific combinations of multiple distributions (eg the probability that AGI is coming soon and also that current alignment research is useful).
* Full model of the distribution: Sometimes useful for simple \ common distributions (all the examples that come to mind aren’t in the context of EA, oh well).
One small note: The examples are there to make the category clearer. These aren’t all cases where expected value is wrong \ inappropriate to use. Specifically, for some of them, I think using expected value works great.
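To make the options above a bit more concrete, here’s a minimal Python sketch. Everything in it is a placeholder of my own (the lognormal “QALYs per $1,000” distribution and its parameters are made up purely for illustration); the point is just that the first three representations can all be read off the same set of Monte Carlo samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Monte Carlo samples of an intervention's impact (QALYs per $1,000).
# The lognormal shape and parameters are placeholders, not real estimates.
samples = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# Expectation & variance
print(f"mean = {samples.mean():.2f}, variance = {samples.var():.2f}")

# A 90% interval for where the result is likely to fall
low, high = np.percentile(samples, [5, 95])
print(f"90% interval: [{low:.2f}, {high:.2f}]")

# Probability of a specific outcome, e.g. doing worse than 1 QALY per $1,000
print(f"P(impact < 1) = {(samples < 1.0).mean():.2f}")

# The "full model" here is just the lognormal itself; with real data you'd
# fit a distribution or look at the histogram of the samples instead.
```

Same samples, several different summaries; which one is useful depends on the decision you’re trying to make.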
Wow! This is really good!
I think the general advice is great, and I really appreciate your candidness: Revealing the data and the materials you used, as well as the level of detail regarding your process.
This isn’t something that is usually written and I’m sure it’ll help a lot of people facing hiring challenges for EA orgs...
I can add a little about my own experience and process regarding rejection (which I agree is one of the hardest parts):
1. I try to honestly explain to candidates why they were rejected (usually by email, sometimes by phone). This is usually possible for almost any candidate who has had an interview that wasn’t very short (with the exception of a few candidates who I have a very strong impression don’t want to hear it). Specifically, if possible, I try to answer the question of “What would need to change for you to be accepted in a year?”. I started out very nervous about how candidates would receive it and have been surprised at how much it’s appreciated.
2. I really agree with what you wrote about not being a jerk, and that timely answers are an important part of it. This is especially true for rejection, partly because it’s easy for us to procrastinate making a decision when that decision is uncomfortable.
3. I think it’s important to make sure everything is worded precisely and clearly and leaves no room for misinterpretation. Be careful not to give false hope that you might still reconsider (if that’s not true), don’t write something that might be interpreted as hinting at some hidden reasons for the rejection, etc. This isn’t the place for writing with style. It should be optimized for conciseness and clarity. This is also why I usually send rejections by email, rather than phone. Phone calls are more personal, but with an email I can look over what I write and make sure it says exactly what I mean.
4. In cases where I have a good impression of a candidate but there isn’t a fit, I offer to intro them to people I know who are also hiring for similar roles at other orgs\companies. It’s a good way of helping everyone involved and shows that I really do believe they can be great for other roles\orgs.
Sorry, I wasn’t very clear on the first point: There isn’t a ‘correct’ prior.
In our context (by context I mean both the small number of observations and the implicit hypotheses that we’re trying to differentiate between), the prior has a large enough weight that it affects the eventual result in a way that makes the method unhelpful.
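To put a rough number on what I mean (just a sketch of my own, not from the post: the 65 years of “trying without success”, the helper function, and both priors below are placeholders): Laplace’s rule generalizes from the uniform prior to any Beta(a, b) prior on the yearly probability, and with only a few decades of observations the 50-year answer ends up almost entirely determined by which prior you pick:

```python
from math import lgamma, exp

def p_event_within(m_years, n_failures, a=1.0, b=1.0):
    """Probability of at least one occurrence in the next m_years, after
    n_failures years with none, under a Beta(a, b) prior on the yearly rate.
    a = b = 1 is the uniform prior, i.e. Laplace's rule of succession."""
    def log_beta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    # Posterior after n failures is Beta(a, b + n); the chance of m further
    # failure years is B(a, b + n + m) / B(a, b + n).
    log_p_no_event = log_beta(a, b + n_failures + m_years) - log_beta(a, b + n_failures)
    return 1.0 - exp(log_p_no_event)

n = 65  # placeholder for "years of trying so far without the event happening"
print(p_event_within(50, n))                  # uniform prior:         ~43%
print(p_event_within(50, n, a=0.001, b=1.0))  # ~0.1%-per-year prior:  ~0.06%
```

In both cases the answer is basically the prior; a few decades of observations just aren’t enough to tell the two hypotheses apart, which is why I don’t think the method is informative here.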
Thank you for writing this!
I really appreciate your approach of thoroughly going through potential issues with your eventual conclusion. It’s a really good way of getting to the interesting parts of the discussion!
The area I’m least convinced by is the use of Laplace’s Law of Succession (LLoC) to suggest that AGI is coming soonish (that isn’t to say there aren’t convincing arguments for this, but I think this argument probably isn’t one of them).
There are two ways of thinking that make me skeptical of using LLoC in this context (they’re related but I think it’s helpful to separate them):
1. Given a small number of observations, there’s not enough information to “get away” from our priors. So whatever prior we load into the formula—we’re bound to get something relatively close to it. This works if we have a good reason to use a uniform prior, or in contexts where we’re only trying to separate hypotheses that aren’t “far enough away” from the uniform prior, which I don’t think is the case here:
In my understanding, what we’re really trying to do is separate two hypotheses: The first is that the chance of AGI appearing in the next 50 years is non-negligible (it won’t make a huge difference to our eventual decision making if it’s 40% or 30% or 20%). The second is that it is negligible (let’s say, less than 0.1%, or one in a thousand).
When we use a uniform prior (which starts out with a 50% chance of AGI appearing within a year) - we have already loaded the formula with the answer and the method isn’t helpful to us.
2. Following on from the “demon objection” in the text, I think that objection could be strengthened to become a lot more convincing. The objection is that LLoC doesn’t take the specific event it’s trying to predict into account, which is strange and sounds problematic. The example given turns out ok: We’ve been trying to summon demons for thousands of years, so the chance of it happening in the next 50 years is calculated to be small.
But of course, that’s just not the best example to show that LLoC is problematic in these areas:
Example 1: I have thought up a completely new and original demon. Obviously, no one ever attempted to summon my new and special demon until this year, when, apparently, it wasn’t summoned. The LLoC chance of summoning my demon next year is quite high (and over the next 50 years is incredibly high). It’s also larger than the chance of summoning any demon (including my own) over those time periods.
The problem isn’t just that I picked an extreme example with a single observation -
Example 2: What is the chance that the movie Psycho is meant to hypnotize everyone watching it and we’ll only realize it when Hitchcock takes over the world? Well, turns out that this hasn’t yet happened for exactly 60 years. So, it seems like the chance of this happening soon is precisely the same as the chance of AGI appearing.
Next, what is the chance of Hitchcock doing this AND Harper Lee (To Kill a Mockingbird came out in the same year) attempting to do this in a similar fashion AND Andre Cassagnes (Etch-A-Sketch is also from 1960) doing so (I want to know the chance of all three happening at the exact same time)? Turns out that this specific and convoluted scenario is just as likely, since it could only start happening in 1960… This is both obviously wrong and an instance of the conjunction fallacy.
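Putting rough numbers on these examples (a quick sketch; the helper function is mine and the “3000 years of demon summoning” is obviously just a placeholder):

```python
def laplace_p_within(m_years, n_years_without):
    """Laplace's rule with a uniform prior and zero occurrences so far:
    probability of at least one occurrence in the next m_years."""
    return 1 - (n_years_without + 1) / (n_years_without + m_years + 1)

print(laplace_p_within(1, 1))      # brand-new demon, next year:        ~33%
print(laplace_p_within(50, 1))     # brand-new demon, next 50 years:    ~96%
print(laplace_p_within(50, 3000))  # any demon, next 50 years:          ~2%
print(laplace_p_within(50, 60))    # anything dated to 1960, 50 years:  ~45%
```

The 60-year row comes out the same whether the event is “Psycho hypnotizes everyone” or the three-way conjunction, which is exactly the problem above.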
That sounds really cool!
I’ll be happy to join! :-)