I’m the CTO of Wave. We build financial infrastructure for unbanked people in sub-Saharan Africa.
Personal site (incl various non-EA-related essays): https://www.benkuhn.net/
Email: ben dot s dot kuhn at the most common email address suffix
“Top” and “(sustainably) fast-growing over a long period of time” are roughly synonymous, but fast growth is the upstream thing that causes a startup to be a good learning experience.
Note that billzito didn’t specify, but the important number here is userbase or revenue growth, not headcount growth; the former causes the latter, but not vice versa, and rapid headcount growth without corresponding userbase growth is very bad.
People definitely can see rapidly increasing responsibility in less-fast-growing startups, but it’s more likely to be because the company is over-hiring than because it actually needs that many people, in which case:
You’ll be working on less important problems that are more likely to be “fake” or busywork
There will be less of a forcing function for you to be very good at your job (because it will be less company-threatening if you aren’t)
There will be less of a forcing function for you to prioritize correctly (again because nothing super bad will happen if you work on the wrong thing)
You’re more likely to experience a lot of politics and internal misalignment in the org
(I’m not saying these applied to you specifically, just that they’re generally more common at companies that are growing less quickly. Of course, they also happen at some fast-growing companies that grow headcount too quickly!)
It sounds like you interpreted me as saying that rejecting resumes without feedback doesn’t make people sad. I’m not saying that—I agree that it makes people sad, although on a per-person basis it makes people much less sad than rejecting them without feedback during later stages, which is what those points were in support of. (Having accidentally rejected people without feedback at many different steps, I’m speaking from experience here.)
However, my main point is that providing feedback on resume applications is much more costly to the organization, not that it’s less beneficial to the recipients. For example, someone might feel like they didn’t get a fair chance either way, but if they get concrete feedback they’re much more likely to argue with the org about it.
I’m not saying this means that most people don’t deserve feedback or something—just that when an org gets 100+ applicants for every position, they’re statistically going to have to deal with lots of people who are in the 95th-plus percentile of “acting in ways that consume lots of time/attention when rejected,” and that can disincentivize them from engaging more than they have to.
Note that at least for Rethink Priorities, a human[1] reads through all applications; nobody is rejected just because of their resume.
I’m a bit confused about the phrasing here because it seems to imply that “Alice’s application is read by a human” and “if Alice is rejected it’s not just because of her resume” are equivalent, but many resume screen processes (including eg Wave’s) involve humans reading all resumes and then rejecting people (just) because of them.
I’m unfamiliar with EA orgs’ interview processes, so I’m not sure whether you’re talking about lack of feedback when someone fails an interview, or when someone’s application is rejected before any interviews. It’s really important to differentiate these because providing feedback on someone’s initial application is a massively harder problem:
There are many more applicants (Wave rejects over 50% of applicants without speaking to them, and that’s with a relatively loose filter)
Candidates haven’t interacted with a human yet, so are more likely to be upset or have an overall bad experience with the org; this is also exacerbated by having to make the feedback generic due to scale
The relative cost of rejecting with vs. without feedback is higher (rejecting without feedback takes seconds, rejecting with feedback takes minutes = ~10x longer)
Candidates are more likely to feel that the rejection didn’t give them a fair chance (because they feel that they’d do a better job than their resume suggests) and dispute the decision; reducing the risk of this (by communicating more effectively + empathetically) requires an even larger time investment per rejection
I feel pretty strongly that if people go through actual interviews they deserve feedback, because it’s a relatively low additional time cost at that point. At the resume screen step, I think the trade-off is less obvious.
I don’t have research management experience in particular, but I have a lot of knowledge work (in particular software engineering) management experience.
IMO, giving insufficient positive feedback is a common, and damaging, blind spot for managers, especially those (like you and me) who expect their reports to derive most of their motivation from being intrinsically excited about their end goal. If unaddressed, it can easily lead to your reports feeling demotivated and like their work is pointless/terrible even when it’s mostly good.
People use feedback not just to determine what to improve at, but also as an overall assessment of whether they’re doing a good job. If you only give negative feedback, you’re effectively biasing this process towards people inferring that they’re doing a bad job. You can try to fight it by explicitly saying “you’re doing a good job” or something, but in my experience this doesn’t really land on an emotional level.
Positive feedback in the form “you are good at X, do more of it” can also be an extremely useful type of feedback! Helping people lean into their strengths often yields as much improvement as helping them shore up their weaknesses, or more.
I’m not particularly good at this myself, but every time I’ve improved at it I’ve had multiple reports say things to the effect of “hey, I noticed you improved at this and it’s awesome and very helpful.”
That said, I agree with you that shit sandwiches are silly and make it obvious that the positive feedback isn’t organic, so they usually backfire. The correct way to give positive feedback is to resist your default negativity bias by calling out specific things that are good when you see them.
Looks like if this doesn’t work out, I should at least update my surname...
I note that the framing / example case has changed a lot between your original comment / my reply (making a $5m grant and writing “person X is skeptical of MIRI” in the “cons” column) and this parent comment (“imagine I pointed a gun to your head and… offer you to give you additional information;” “never stopping at [person X thinks that p]”). I’m not arguing for entirely refusing to trust other people or dividing labor, as you implied there. I specifically object to giving weight to other people’s top-line views on questions where there’s substantial disagreement, based on your overall assessment of that particular person’s credibility / quality of intuition / whatever, separately from your evaluation of their finer-grained sub-claims.
If you are staking $5m on something, it’s hard for me to imagine a case where it makes sense to end up with an important node in your tree of claims whose justification is “opinions diverge on this but the people I think are smartest tend to believe p.” The reason I think this is usually bad is that (a) it’s actually impossible to know how much weight it’s rational to give someone else’s opinion without inspecting their sub-claims, and (b) it leads to groupthink/herding/information cascades.
As a toy example to illustrate (a): suppose that for MIRI to be the optimal grant recipient, it both needs to be the case that AI risk is high (A) and that MIRI is the best organization working to mitigate it (B). A and B are independent. The prior is (P(A) = 0.5, P(B) = 0.5). Alice and Bob have observed evidence with a 9:1 odds ratio in favor of A, so think (P(A) = 0.9, P(B) = 0.5). Carol has observed evidence with a 9:1 odds ratio in favor of B, so thinks (P(A) = 0.5, P(B) = 0.9). Alice, Bob and Carol all have the same top-line view of MIRI (P(A and B) = 0.45), but the rational aggregation of Alice and Bob’s “view” is much less positive than the rational aggregation of Bob and Carol’s.
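To make the arithmetic concrete, here’s a minimal sketch (assuming Alice’s and Bob’s evidence is independent; if they saw the same evidence, their aggregate stays at 0.45 and the gap below is even wider):

```python
def posterior(prior_odds: float, *likelihood_ratios: float) -> float:
    """Combine prior odds with independent likelihood ratios into a probability."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Everyone's prior on A and B is 1:1, i.e. P = 0.5.
# Alice and Bob each hold 9:1 evidence for A; Carol holds 9:1 evidence for B.

# Aggregating Alice + Bob: strong evidence on A, nothing new on B.
p_alice_bob = posterior(1, 9, 9) * posterior(1)   # P(A) ~ 0.988, P(B) = 0.5
print(f"Alice + Bob: P(A and B) = {p_alice_bob:.2f}")   # ~0.49

# Aggregating Bob + Carol: moderate evidence on both A and B.
p_bob_carol = posterior(1, 9) * posterior(1, 9)   # P(A) = 0.9, P(B) = 0.9
print(f"Bob + Carol: P(A and B) = {p_bob_carol:.2f}")   # 0.81
```

Bob and Carol rationally aggregate to 0.81, while Alice and Bob cap out around 0.49: identical top-line views, very different rational aggregates, which is why you can’t know how to weight someone’s opinion without inspecting their sub-claims.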
It’s interesting that you mention hierarchical organizations, because I think they usually follow a better process for dividing up epistemic labor, which is to assign different sub-problems to different people rather than to average a large number of people’s beliefs on a single question. This works better because the sub-problems are more likely to be independent of each other, so they don’t require as much communication / model-sharing to aggregate their results.
In fact, when hierarchical organizations do the other thing—“brute force” aggregation of others’ beliefs in situations of disagreement—it usually indicates an organizational failure. My own experience is that I often see people do something a particular way, even though they disagree with it, because they think it’s my preference; but it turns out they had a bad model of my preferences (often because they saw me express a preference in a different context and over-generalized it) and would have been better off using their own judgment.
if you make a decision with large-scale and irreversible effects on the world (e.g. “who should get this $5M grant?”) I think it would usually be predictably worse for the world to ignore others’ views
Taking into account specific facts or arguments made by other people seems reasonable here. Just writing down e.g. “person X doesn’t like MIRI” in the “cons” column of your spreadsheet seems foolish and wrongheaded.
Framing it as “taking others’ views into account” or “ignoring others’ views” is a big part of the problem, IMO—that language itself directs people towards evaluating the people rather than the arguments, and overall opinions rather than specific facts or claims.
Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions how and who to defer to.
...
My interpretation was that my judgement generally was not to be trusted, and if it was not good enough to start new projects myself, I should not make generic career decisions myself, even where the possible downsides were very limited.
I also get a lot of this vibe from (parts of) the EA community, and it drives me a little nuts. Examples:
Moral uncertainty, giving other moral systems weight “because other smart people believe them” rather than because they seem object-level reasonable
Lots of emphasis on avoiding accidentally doing harm by being uninformed
People bring up “intelligent people disagree with this” as a reason against something rather than going through the object-level arguments
Being epistemically modest by, say, replacing your own opinions with the average opinion of everyone around you might improve the epistemics of the majority of people (in fact, by definition, it almost must), but it is a terrible idea on a group level: it’s a recipe for information cascades, groupthink and herding.
In retrospect, it’s not surprising that this has ended up with numerous people being scarred and seriously demoralized by applying for massively oversubscribed EA jobs.
I guess it’s ironic that 80,000 Hours—one of the most frequent repeaters of the “don’t accidentally cause harm” meme—seems to have accidentally caused you quite a bit of harm with this advice (and/or its misinterpretations being repeated by others)!
I haven’t had the opportunity to see this play out over multiple years/companies, so I’m not super well-informed yet, but I think I should have called out this part of my original comment more:
Not to mention various high-impact roles at companies that don’t involve formal management at all.
If people think management is their only path to success then sure, you’ll end up with everyone trying to be good at management. But if instead of starting from “who fills the new manager role” you start from “how can <person X> have the most impact on the company”—with a menu of options/archetypes that lean on different skillsets—then you’re more likely to end up with people optimizing for the right thing, as best they know how.
I had a hard time answering this and I finally realized that I think it’s because it sort of assumes performance is one-dimensional. My experience has been quite far from that: the same engineer who does a crap job on one task can, with a few tweaks to their project queue or work style, crush it at something else. In fact, making that happen is one of the most important parts of my (and all managers’) jobs at Wave—we spend a lot of time trying to route people to roles where they can be the most successful.
Similarly, management is also not one-dimensional: different management roles need different skill sets which overlap with individual-contributor roles in different ways. Not to mention various high-impact roles at companies that don’t involve formal management at all. So I think my tl;dr answer would be “you should try to figure out how your current highest performers on various axes can have more leveraged impact on your company, which is often some flavor of management, but it depends a lot on the people and roles involved.”
For example, take engineering at Wave. Our teams are actually organized in such a way that most engineers are on a team led by (i.e. whose task queue is prioritized by) a product manager. Each engineer also has an engineering mentor who is responsible for giving them feedback, conducting 1:1s with them, contributing to their performance reviews, etc.
Product managers don’t have to be technical at all, and some of the best ones aren’t, but some of the best engineers also move laterally into product management because the skills that make them good engineers overlap a lot with that role. Engineering mentors usually need to be more technically skilled than their mentees, but they don’t necessarily have to be the best engineers in the company; skill at teaching and resonance with the mentor role are more important.
We also have a “platform” team which works on engineer-facing tooling and infrastructure. Currently, I’m leading this team, but in the end state I expect it to have a more traditional engineering manager. For this person, some dimensions of engineering competence will be quite important, others won’t, and they’ll need extra skills that are not nearly as important to individual contributors (prioritization, communication, organization...). I expect they would probably be one of our “best performers” by some metrics, but not by others.
I’ll let Lincoln add his as well, but here are a few things we do that I think are really helpful for this:
We’ve found our bimonthly in-person “offsites” to be extremely important. For new hires, I often see their happiness and productivity increase a lot after their first offsite, because it becomes easier and more fun for them to work with their coworkers.
Having the right cadence of standing meetings (1-on-1s, team meetings, retrospectives, etc.) becomes much more important since issues are less likely to surface in “hallway” conversations.
We try to make it really easy for people to upgrade conversations to video calls, both by frequently encouraging them to do so, and by making sure that every new hire has a “get to know you” call with as many coworkers as possible in their first few weeks.
(Your mileage may vary with these, of course! In particular, one relevant difference between Wave and other remote organizations is that I think Wave leans more heavily on “synchronous” calls relative to “asynchronous” Slack/email messages. This is important for us since 80%+ of us speak English as a third-plus language—it’s easier to clear up misunderstandings on a call!)
Agree that if you put a lot of weight on the efficient market hypothesis, then starting a company looks bad and probably isn’t worth it. Personally, I don’t think markets are efficient enough for this to be a dominant consideration (see e.g. my response here for partial justification; not sure it’s possible to give a convincing full justification since it seems like a pretty deep worldview divergence between us and the more modest-epistemology-focused wing of the EA movement).
2. For personal work, it’s annoying, but not a huge bottleneck—my internet in Jijiga (used in Dan’s article) was much worse than anywhere else I’ve been in Africa. (Ethiopia has a state-run telecom monopoly that provides among the worst service in the world.) You do have to put some effort into managing usage (e.g. tracking things that burn network via Little Snitch, caching docs offline, minimizing Docker image size), but it’s not terrible.
It was a sufficient bottleneck to reading some blogs that I wrote a simple proxy to strip bloat from web pages while I was in Senegal. But those were mostly pathologically un-optimized blogs—e.g., their page weight was larger than the page weight of the web-based IDE (Glitch) that I used to write the proxy.
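For a sense of what such a proxy can look like, here’s a minimal sketch (assumed details, not the actual code I wrote): a local HTTP service that fetches the requested page and drops script/style/iframe blocks before returning it.

```python
import re
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Tags whose contents are usually pure bloat on a slow connection.
HEAVY_TAGS = re.compile(rb"<(script|style|iframe)\b.*?</\1>", re.DOTALL | re.IGNORECASE)

class StripProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect requests like /https://example.com/some-post
        url = self.path.lstrip("/")
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
        slimmed = HEAVY_TAGS.sub(b"", body)  # drop scripts, styles, iframes
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(slimmed)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), StripProxy).serve_forever()
```

Run it locally and visit e.g. http://localhost:8000/https://example.com/ in a browser; for text-heavy blogs, dropping scripts and styles alone often cuts page weight dramatically.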
3. Network latency has been a major bottleneck for our programming; for instance, we wrote a custom UDP-based transport layer protocol to speed up our app because TCP handshakes were too slow (I gave a talk on this if you’re curious). We also adopted GraphQL relatively early in part because it helped us reduce request/response sizes and number of roundtrips.
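To illustrate why the handshake matters: a fresh TCP connection spends a full round trip on the SYN/SYN-ACK/ACK exchange before any application data moves, while a UDP datagram carries the request immediately. A toy sketch of the latter (not our actual protocol; the endpoint is hypothetical):

```python
import socket

SERVER = ("localhost", 9999)  # hypothetical endpoint, for illustration only

def udp_request(payload: bytes, timeout: float = 2.0) -> bytes:
    """Send one request datagram and wait for one response datagram.

    There is no connection setup: the request itself is the first packet
    on the wire. A real protocol would layer sequence numbers, retries,
    and encryption on top of this.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(payload, SERVER)        # request goes out immediately
        data, _addr = sock.recvfrom(65535)  # response comes back one RTT later
    return data
```

On a high-latency mobile network, where a single round trip can cost hundreds of milliseconds, skipping connection setup is a significant win; the hard part is rebuilding reliability and security on top.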
On the UX design side, a major obstacle is that many of our users aren’t particularly literate (let alone tech-literate). For instance, we often communicate with users via (in-app) voice recordings instead of the more traditional text announcements. More generally, it’s a strong forcing function to keep our app simple so that the UI can be easily memorized and reading is as optional as possible. It also pushes us towards having more in-person touch points with our users—for instance, agents often help new users download the app and learn how to use it, and pre-COVID we had large teams of distributors who would go to busy markets and sign people up for the app in person.
The main outcome metric we currently try to optimize is number of monthly active users, because our business has strong network effects. We can’t share exact stats for various reasons, but I am allowed to say that we crossed 1m users in June, and our growth rates are sufficiently high that our current user base is substantially larger than that. We’re currently growing more quickly than most well-known fintech companies of similar size that I know of.
On EA providing for-profit funding: hard to say. Considerations against:
Wave looks like a very good investment by non-EA standards, so additional funding from EAs wouldn’t have affected our fundraising very much (not sure how much this generalizes to other companies)
At later stages, this is very capital-intensive, so probably wouldn’t make sense except as a thing for eg Open Phil to do with its endowment
Founding successful companies requires putting a lot of weight on inside-view considerations, a trait that’s not particularly compatible with typical EA epistemology. (Notably, Wave gets most of this trait from Drew, the CEO, who, while value-aligned with EA, finds it hard to engage with standard EA-style reasoning for this reason.)
Considerations in favor:
Helps keep the company controlled by value-aligned people (not sure how important this is, I think the founders of Wave will end up retaining full control)
If the companies are good, it doesn’t actually cost anything except tying up capital for a while
Overall, I think it could make sense at early stages, where people matter more and metrics matter less (and capital goes further), but even at early stages there’s probably much more of a talent constraint than a funding constraint.
Cool! With the understanding that these aren’t your opinions, I’m going to engage with them anyway because I think they’re interesting. For all four of these, I agree that they directionally push toward for-profits being less good, but I think people overestimate the magnitude of the effect.
For-profit entrepreneurship has built-in incentives that already cause many entrepreneurs to try and implement any promising opportunities. As a result, we’d expect it to be drastically less neglected, or at least drastically less neglected relative to nonprofit opportunities that are similar in how promising they are
Despite the built-in incentives, I think “which companies get built” is still pretty contingent and random, depending on which people try to do things. For instance, it’s been obvious since ~2012 that M-Pesa had an amazing business in Kenya, but it still hasn’t had equally successful copycats, let alone people trying to improve on it, in other countries. If the market were really efficient here, I think something like Wave would be 4+ years further along in its trajectory.
The specific cause areas that the EA movement currently sees as the most promising—including global poverty and health, animal welfare, and the longterm future—all serve recipients who (to different degrees) are incapable of significantly funding such work
Similarly, this is directionally correct but easy to overweight—there are still for-profit companies working in all of these spaces that seem likely to have very large impacts (Wave, Impossible Foods, Beyond Meat, SpaceX, OpenAI...)
For-profit organizations may produce incentives that make it unlikely to make the decisions that will end up producing enormous impact (in the EA sense of that term).
This is definitely a risk, and something that we worry about at Wave. That said:
In many cases, revenue/growth and impact are highly correlated. The examples I can think of where they aren’t mostly involve monopolies doing anticompetitive or user-hostile things.
In the monopoly case, many monopolies seem to have wide freedom of action and are still controlled by their founders (e.g. Google, Facebook), and their decisions are often driven as much by internal dynamics as by external incentives. I’m uncertain here, but it seems likely that if these companies thought more like EAs, they would produce more impact.
Finally, I’ve also heard from several people the claim that today EA has an immense amount of funding, and if you’re a competent person founding a charity that works according to EA principles it is incredibly easy to get non-trivial amounts of funding
I think “nontrivial” for a nonprofit is trivial for a successful for-profit :) Wave has raised tens of millions of dollars in equity and hundreds of millions in debt, and we’re likely to raise 10x+ more in success cases. We definitely could not have raised nearly this much as a nonprofit. Same with eg OpenAI which got $1b in nonprofit commitments but still had to become (capped) for-profit in order to grow.
Interesting. It sounds like you’re saying that there are many EAs investing tons of time in doing things that are mostly only useful for getting particular roles at 1-2 orgs. I didn’t realize that.
In addition to the feedback thing, this seems like a generally very bad dynamic—for instance, in your example, regardless of whether she gets feedback, Sally has now more or less wasted years of graduate schooling.