I’m the CTO of Wave. We build financial infrastructure for unbanked people in sub-Saharan Africa.
Personal site (incl various non-EA-related essays): https://www.benkuhn.net/
Email: ben dot s dot kuhn at the most common email address suffix
I note that the framing / example case has changed a lot between your original comment / my reply (making a $5m grant and writing “person X is skeptical of MIRI” in the “cons” column) and this parent comment (“imagine I pointed a gun to your head and… offer you to give you additional information;” “never stopping at [person X thinks that p]”). I’m not arguing for entirely refusing to trust other people or dividing labor, as you implied there. I specifically object to giving weight to other people’s top-line views on questions where there’s substantial disagreement, based on your overall assessment of that particular person’s credibility / quality of intuition / whatever, separately from your evaluation of their finer-grained sub-claims.
If you are staking $5m on something, it’s hard for me to imagine a case where it makes sense to end up with an important node in your tree of claims whose justification is “opinions diverge on this but the people I think are smartest tend to believe p.” The reason I think this is usually bad is that (a) it’s actually impossible to know how much weight it’s rational to give someone else’s opinion without inspecting their sub-claims, and (b) it leads to groupthink/herding/information cascades.
As a toy example to illustrate (a): suppose that for MIRI to be the optimal grant recipient, it both needs to be the case that AI risk is high (A) and that MIRI is the best organization working to mitigate it (B). A and B are independent, and the prior is P(A) = P(B) = 0.5. Alice and Bob have each independently observed evidence with a 9:1 odds ratio in favor of A, so think (P(A) = 0.9, P(B) = 0.5). Carol has observed evidence with a 9:1 odds ratio in favor of B, so thinks (P(A) = 0.5, P(B) = 0.9). Alice, Bob and Carol all have the same top-line view of MIRI (P(A and B) = 0.45), but the rational aggregation of Alice and Bob’s “views” (P(A) ≈ 0.99, P(B) = 0.5, so P(A and B) ≈ 0.49) is much less positive than the rational aggregation of Bob and Carol’s (P(A) = P(B) = 0.9, so P(A and B) = 0.81).
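To make the arithmetic concrete, here’s a minimal sketch in Python (my own illustration, not part of the original example; it assumes Alice’s and Bob’s evidence about A is independent):

```python
# Aggregate independent evidence by multiplying odds ratios (Bayes' rule),
# then compare against the identical top-line views.
def posterior(prior_odds, *likelihood_ratios):
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Everyone starts from P(A) = P(B) = 0.5, i.e. prior odds of 1:1.
p_a = posterior(1, 9, 9)  # Alice's and Bob's independent 9:1 evidence on A
p_b = 0.5                 # neither of them has evidence on B
print(f"Alice + Bob: P(A and B) = {p_a * p_b:.2f}")  # ~0.49

p_a = posterior(1, 9)     # only Bob's 9:1 evidence bears on A
p_b = posterior(1, 9)     # only Carol's 9:1 evidence bears on B
print(f"Bob + Carol: P(A and B) = {p_a * p_b:.2f}")  # 0.81

# Yet each individual's top-line view is the same: 0.9 * 0.5 = 0.45.
```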
It’s interesting that you mention hierarchical organizations because I think they usually follow a better process for dividing up epistemic labor, which is to assign different sub-problems to different people, rather than to average a large number of people’s beliefs about a single question. This works better because the sub-problems are more likely to be independent of each other, so they don’t require as much communication / model-sharing to aggregate their results.
In fact, when hierarchical organizations do the other thing—”brute force” aggregate others’ beliefs in situations of disagreement—it usually indicates an organizational failure. My own experience is that I often see people do something a particular way, even though they disagree with it, because they think that’s my preference; but it turns out they had a bad model of my preferences (often because they’d observed a preference I expressed in a different context) and would have been better off using their own judgment.
if you make a decision with large-scale and irreversible effects on the world (e.g. “who should get this $5M grant?”) I think it would usually be predictably worse for the world to ignore others’ views
Taking into account specific facts or arguments made by other people seems reasonable here. Just writing down e.g. “person X doesn’t like MIRI” in the “cons” column of your spreadsheet seems foolish and wrongheaded.
Framing it as “taking others’ views into account” or “ignoring others’ views” is a big part of the problem, IMO—that language itself directs people towards evaluating the people rather than the arguments, and overall opinions rather than specific facts or claims.
Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions how and who to defer to.
...
My interpretation was that my judgement generally was not to be trusted, and if it was not good enough to start new projects myself, I should not make generic career decisions myself, even where the possible downsides were very limited.
I also get a lot of this vibe from (parts of) the EA community, and it drives me a little nuts. Examples:
Moral uncertainty, giving other moral systems weight “because other smart people believe them” rather than because they seem object-level reasonable
Lots of emphasis on avoiding accidentally doing harm by being uninformed
People bring up “intelligent people disagree with this” as a reason against something rather than going through the object-level arguments
Being epistemically modest by, say, replacing your own opinions with the average opinion of everyone around you, might improve the epistemics of the majority of people (in fact it almost must by definition), but it is a terrible idea on a group level: it’s a recipe for information cascades, groupthink and herding.
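As a toy illustration of that claim (my own simulation, nothing rigorous): give everyone a noisy private signal about some quantity, then compare a group that reports signals independently against one where each person “modestly” averages their view with the running consensus. Individuals get more accurate, but the group average gets worse, because early opinions are double-counted.

```python
import numpy as np

rng = np.random.default_rng(0)
truth, n, trials = 1.0, 100, 2000
solo_err = {"independent": [], "modest": []}
group_err = {"independent": [], "modest": []}

for _ in range(trials):
    signals = truth + rng.normal(0, 1, size=n)  # noisy private evidence
    # Modest agents speak in sequence, averaging their own signal with
    # the running consensus of everyone who spoke before them.
    opinions = [signals[0]]
    for s in signals[1:]:
        opinions.append((s + np.mean(opinions)) / 2)
    opinions = np.array(opinions)

    solo_err["independent"].append(np.mean(np.abs(signals - truth)))
    solo_err["modest"].append(np.mean(np.abs(opinions - truth)))
    group_err["independent"].append(abs(signals.mean() - truth))
    group_err["modest"].append(abs(opinions.mean() - truth))

for k in ("independent", "modest"):
    print(f"{k}: typical individual error {np.mean(solo_err[k]):.2f}, "
          f"group-average error {np.mean(group_err[k]):.2f}")
```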
In retrospect, it’s not surprising that this has ended up with numerous people being scarred and seriously demoralized by applying for massively oversubscribed EA jobs.
I guess it’s ironic that 80,000 Hours—one of the most frequent repeaters of the “don’t accidentally cause harm” meme—seems to have accidentally caused you quite a bit of harm with this advice (and/or its misinterpretations being repeated by others)!
I haven’t had the opportunity to see this play out over multiple years/companies, so I’m not super well-informed yet, but I think I should have called out this part of my original comment more:
Not to mention various high-impact roles at companies that don’t involve formal management at all.
If people think management is their only path to success then sure, you’ll end up with everyone trying to be good at management. But if instead of starting from “who fills the new manager role” you start from “how can <person X> have the most impact on the company”—with a menu of options/archetypes that lean on different skillsets—then you’re more likely to end up with people optimizing for the right thing, as best they know how.
I had a hard time answering this and I finally realized that I think it’s because it sort of assumes performance is one-dimensional. My experience has been quite far from that: the same engineer who does a crap job on one task can, with a few tweaks to their project queue or work style, crush it at something else. In fact, making that happen is one of the most important parts of my (and all managers’) jobs at Wave—we spend a lot of time trying to route people to roles where they can be the most successful.
Similarly, management is also not one-dimensional: different management roles need different skill sets which overlap with individual-contributor roles in different ways. Not to mention various high-impact roles at companies that don’t involve formal management at all. So I think my tl;dr answer would be “you should try to figure out how your current highest performers on various axes can have more leveraged impact on your company, which is often some flavor of management, but it depends a lot on the people and roles involved.”
For example, take engineering at Wave. Our teams are actually organized in such a way that most engineers are on a team led by (i.e. whose task queue is prioritized by) a product manager. Each engineer also has an engineering mentor who is responsible for giving them feedback, conducting 1:1s with them, contributing to their performance reviews, etc.
Product managers don’t have to be technical at all, and some of the best ones aren’t, but some of the best engineers also move laterally into product management because the ways in which they are good engineers overlap a lot with that role. Engineering mentors usually need to be more technically skilled than their mentees, but they don’t necessarily have to be the best engineers in the company; skill at teaching and resonance with the role of mentor are more important.
We also have a “platform” team which works on engineer-facing tooling and infrastructure. Currently, I’m leading this team, but in the end state I expect it to have a more traditional engineering manager. For this person, some dimensions of engineering competence will be quite important, others won’t, and they’ll need extra skills that are not nearly as important to individual contributors (prioritization, communication, organization...). I expect they would probably be one of our “best performers” by some metrics, but not by others.
I’ll let Lincoln add his as well, but here are a few things we do that I think are really helpful for this:
We’ve found our bimonthly in-person “offsites” to be extremely important. For new hires, I often see their happiness and productivity increase a lot after their first offsite, because it becomes easier and more fun for them to work with their coworkers.
Having the right cadence of standing meetings (1-on-1s, team meetings, retrospectives, etc.) becomes much more important since issues are less likely to surface in “hallway” conversations.
We try to make it really easy for people to upgrade conversations to video calls, both by frequently encouraging them to do so, and by making sure that every new hire has a “get to know you” call with as many coworkers as possible in their first few weeks.
(Your mileage may vary with these, of course! In particular, one relevant difference between Wave and other remote organizations is that I think Wave leans more heavily on “synchronous” calls relative to “asynchronous” Slack/email messages. This is important for us since 80%+ of us speak English as a third-plus language—it’s easier to clear up misunderstandings on a call!)
Agree that if you put a lot of weight on the efficient market hypothesis, then starting a company looks bad and probably isn’t worth it. Personally, I don’t think markets are efficient enough for this to be a dominant consideration (see e.g. my response here for partial justification; not sure it’s possible to give a convincing full justification since it seems like a pretty deep worldview divergence between us and the more modest-epistemology-focused wing of the EA movement).
2. For personal work, it’s annoying, but not a huge bottleneck—my internet in Jijiga (used in Dan’s article) was much worse than anywhere else I’ve been in Africa. (Ethiopia has a state-run monopoly telecom that provides some of the worst service in the world.) You do have to put some effort into managing usage (e.g. tracking things that burn network via Little Snitch, caching docs offline, minimizing Docker image size), but it’s not terrible.
It was a sufficient bottleneck for reading some blogs that I wrote a simple proxy to strip bloat from web pages while I was in Senegal. But those are mostly pathologically un-optimized blogs—e.g., their page weight was larger than that of the web-based IDE (Glitch) that I used to write the proxy.
3. Network latency has been a major bottleneck for our programming; for instance, we wrote a custom UDP-based transport layer protocol to speed up our app because TCP handshakes were too slow (I gave a talk on this if you’re curious). We also adopted GraphQL relatively early, in part because it helped us reduce request/response sizes and the number of roundtrips.
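For intuition on why handshakes dominate (illustrative numbers only; this is not a description of our actual protocol): on a high-latency link, a fresh TCP + TLS 1.2 connection spends several round trips before any application data flows, which a 0-RTT-style protocol over UDP avoids.

```python
rtt_ms = 400  # hypothetical round-trip time on a congested mobile link

tcp_handshake = 1 * rtt_ms    # SYN / SYN-ACK; data can follow the ACK
tls12_handshake = 2 * rtt_ms  # TLS 1.2 needs two more round trips
request = 1 * rtt_ms          # the actual request/response pair

print("fresh TCP + TLS 1.2:", tcp_handshake + tls12_handshake + request, "ms")  # 1600 ms
print("0-RTT over UDP:     ", request, "ms")                                    # 400 ms
```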
On the UX design side, a major obstacle is that many of our users aren’t particularly literate (let alone tech-literate). For instance, we often communicate with users via (in-app) voice recordings instead of the more traditional text announcements. More generally, it’s a strong forcing function to keep our app simple, so that the UI can be easily memorized and reading is as optional as possible. It also pushes us towards having more in-person touch points with our users—for instance, agents often help new users download the app and learn how to use it, and pre-COVID we had large teams of distributors who would go to busy markets and sign people up for the app in person.
The main outcome metric we try to optimize is currently number of monthly active users, because our business has strong network effects. We can’t share exact stats for various reasons, but I am allowed to say that we crossed 1m users in June, and our growth rates are sufficiently high that our current user base is substantially larger than that. We’re currently growing more quickly than most well-known fintech companies of similar sizes that I know of.
On EA providing for-profit funding: hard to say. Considerations against:
Wave looks like a very good investment by non-EA standards, so additional funding from EAs wouldn’t have affected our fundraising very much (not sure how much this generalizes to other companies)
At later stages, companies like this are very capital-intensive, so funding them probably wouldn’t make sense except as a thing for e.g. Open Phil to do with its endowment
Founding successful companies requires putting a lot of weight on inside-view considerations, a trait that’s not particularly compatible with typical EA epistemology. (Notably, Wave gets most of this trait from Drew, the CEO, who, while value-aligned with EA, finds it hard to engage with standard EA-style reasoning for this reason.)
Considerations in favor:
Helps keep the company controlled by value-aligned people (not sure how important this is; I think the founders of Wave will end up retaining full control)
If the companies are good, it doesn’t actually cost anything except tying up capital for a while
Overall, I think it could make sense at early stages, where people matter more and metrics matter less (and capital goes further), but even at early stages there’s probably much more of a talent constraint than a funding constraint.
Cool! With the understanding that these aren’t your opinions, I’m going to engage with them anyway because I think they’re interesting. For all four of these, I agree that they directionally push toward for-profits being less good, but I think people overestimate the magnitude of the effect.
For-profit entrepreneurship has built-in incentives that already cause many entrepreneurs to try and implement any promising opportunities. As a result, we’d expect it to be drastically less neglected, or at least drastically less neglected relative to nonprofit opportunities that are similar in how promising they are
Despite the built-in incentives, I think “which companies get built” is still pretty contingent and random, based on which people try to do things. For instance, it’s been obvious since ~2012 that M-Pesa had an amazing business in Kenya, but it still hasn’t had equally successful copycats, let alone people trying to improve on it, in other countries. If the market were really efficient here, I think something like Wave would be 4+ years further along in its trajectory.
The specific cause areas that the EA movement currently sees as the most promising—including global poverty and health, animal welfare, and the longterm future—all serve recipients who (to different degrees) are incapable of significantly funding such work
Similarly, this is directionally correct but easy to overweight—there are still for-profit companies working in all of these spaces that seem likely to have very large impacts (Wave, Impossible Foods, Beyond Meat, SpaceX, OpenAI...)
For-profit organizations may produce incentives that make it unlikely to make the decisions that will end up producing enormous impact (in the EA sense of that term).
This is definitely a risk, and something that we worry about at Wave. That said:
In many cases, revenue/growth and impact are highly correlated. In the examples I can think of where they aren’t, the divergence mostly involves monopolies doing anticompetitive or user-hostile things.
On the monopoly case: many monopolies seem to have wide freedom of action and are still controlled by founders (e.g. Google, Facebook), and their decisions are often driven as much by internal dynamics as by external incentives. I’m uncertain here, but it seems likely that if these companies thought more like EAs they would produce more impact.
Finally, I’ve also heard from several people the claim that today EA has an immense amount of funding, and if you’re a competent person founding a charity that works according to EA principles it is incredibly easy to get non-trivial amounts of funding
I think “nontrivial” for a nonprofit is trivial for a successful for-profit :) Wave has raised tens of millions of dollars in equity and hundreds of millions in debt, and we’re likely to raise 10x+ more in success cases. We definitely could not have raised nearly this much as a nonprofit. Same with eg OpenAI which got $1b in nonprofit commitments but still had to become (capped) for-profit in order to grow.
Hmm. This argument seems like it only works if there are no market failures (i.e. no situations where it’s impossible to capture a decent fraction of the value created), and it seems like most nonprofits address some sort of market failure? (e.g. “people do not understand the benefits of vitamin-fortified food,” “vaccination has strong positive externalities”...)
I agree with most of what Lincoln said and would also plug Why and how to start a for-profit company serving emerging markets as material on this, if you haven’t read it yet :)
Can you elaborate on the “various reasons” that people argue for-profit entrepreneurship is less promising than nonprofit entrepreneurship or provide any pointers on reading material? I haven’t run across these arguments.
Great questions!
What are common failure cases/traps to avoid
I don’t know about “most common” as I think it varies by company, but the worst one for me was allowing myself to get distracted by problems that were more rewarding in the short term, but less important or leveraged. I wrote a bit about this in Attention is your scarcest resource.
How much should I be directly coding vs “architecting” vs process management
Related to the above, you should never be coding anything that’s even remotely urgent (because it’ll distract you too much from non-coding problems). For the first while, you should probably try not to code at all because learning how not to suck as a manager will be more than full-time. Later, it’s reasonable to work in “important but not urgent” stuff in your slack time, as long as you have the discipline not to get distracted by it.
Architecting vs process management depends on what your problems are, what kind of leader you want to be and what you can delegate to other people.
How do I approach hiring?
If you are hiring, hiring is your #1 priority and you should spend as much time and attention on it as is practical. Hiring better people has a magical way of solving many of your other problems.
Hiring can also be really demoralizing (because you are constantly rejecting people and/or being rejected), so it’s hard to have the conviction to put more effort into it until you’ve seen firsthand how much of a difference it makes.
For me, the biggest hiring improvement was getting our final interview to a point where I was quite confident that anyone who passed it would be a good engineer at Wave. This took many iterations, but lowering the risk of a bad hire meant that (a) I wasn’t distracted by stressing out about tricky hire/no-hire decisions, (b) we could indiscriminately put people through our hiring funnel and trust that the process would come to a reasonable verdict. After this change, our 10th-percentile hire has been about as good as our 50th-percentile hire previously, and we went from 4 engineers to 25 in a bit over a year.
I expect the exact same thing goes for investing in people once you’ve hired them, but I’m not as good at that yet so don’t have concrete advice.
Just generally, what would you have imparted on past-you?
You suck at hiring, get better.
If you’re worried that someone is sad about something (especially something you did), ask them!
Org structure matters a lot; friction, bad execution, etc. is often downstream of a bad division of responsibility between teams, teams having the wrong goals, etc. (Matters more once you are responsible for multiple teams)
Accept that you hate telling people what to do, and manage in such a way that you don’t have to. (Perhaps specific to me.)
Hiring.
Sorry for the minimalist website :) A couple clarifications:
We indeed split our businesses into Sendwave (international money transfer) and Wave (mobile money). Wave.com is the website for the latter.
The latter currently operates only in Senegal and Cote d’Ivoire (stay tuned though).
We charge no fees for deposits or withdrawals, and a flat 1% to send. All in, I believe we’re about 80% cheaper than Orange Money for typical transaction sizes.
We don’t provide services to Orange—if you saw the logo on the website it’s just because we let our customers use their Wave balance to purchase Orange airtime.
For the focus of this concept, I am more concerned with providing Mobile Money from the most relevant and fair company available (whoever that is) to areas and people that so far did not have that service, rather than promoting movements from one company to the other which might be more efficient but will have a much smaller effect in poverty reduction.
This is our goal as well; to quote myself in another comment:
Despite the fact that M-Pesa started in 2008, mobile money in most other countries in sub-Saharan Africa is kind of crap by comparison (much more expensive, worse service, smaller agent network, etc.) because most telecoms have not even been able to copycat M-Pesa effectively. By executing better, you can speed up the adoption of mobile money.
Even Orange (which is fairly widespread in Senegal) has only gotten 25% of their own userbase onto mobile money (source) because they, like most mobile money systems, are executing really badly compared to what’s possible. There is a lot of room to make mobile money more accessible even in countries with already-existing mobile money. (Which at this point is nearly all countries AFAIK—it’s easy for a telecom to buy an off the shelf mobile money service from something like Ericsson or Huawei—much harder for them to actually execute well on rolling it out.)
Hey Marc, cool that you’re thinking about this!
I work for Wave; we build mobile money systems in Senegal, Cote d’Ivoire, and hopefully soon other countries. Here are some thoughts on these interventions based on Wave’s experience:
Interventions 1-2 (creating accounts): I think for most people that don’t use mobile money, in countries where mobile money is available, “not having an account” is not the main blocker. It’s more likely to be something like
They don’t live near enough to an agent
Mobile money charges fees that are too high given the typical amounts the person wants to send
They don’t trust the service
They don’t trust the agent they live near
They can’t read, so it’s hard for them to use the app (they would have to memorize the UI)
Intervention 3 (lower restrictions): In countries that don’t have a way of using mobile money without an ID, that’s an extremely valuable thing to advocate for. Also, at least for Wave, our users constantly ask for higher transaction limits than the central bank allows us to give them. Both of these policies are probably at least somewhat based on FUD spread by established players (banks?) that don’t want mobile money to succeed. However, you’re probably right that mobile money companies already have the best incentive to accomplish this change; it’s also hard to get the ear of a central bank as a random foreigner. But there may be something interesting in this space.
Intervention 4 (accounts for ID-less people): this is interesting, although I believe that at least in WAEMU, it’s already possible to use mobile money without an ID with low transaction limits (you can receive at most ~$400/mo). Still, a lot of people want to send/receive more than that, and helping people with paperwork to get a replacement ID is likely to be very helpful in other ways too :)
Intervention 5 (starting agencies): In Wave’s experience, better access to agents is the #1 driver of mobile money growth (at least until a system is so big that it hits geographic saturation). Most mobile money systems also end up working with third-party providers of agent services because they don’t have the organizational capacity to manage a huge number of agents themselves. There’s probably room for an org that’s a third-party agent network focused on the poorest areas in a given country, which would otherwise be last on the mobile money system’s priority list for expansion.
Intervention 6 (more research): We’ve found good research on other mobile money systems to be hard to come by, but incredibly useful, even just basics like “here is how M-Pesa expanded over time” or “here are some statistics on ZAAD” (these help us a lot with our own expansion strategy). Although the type of research we want is probably somewhat different from the type of research that would be most useful to other consumers of mobile money research.
I would also add another:
Intervention 7—build a better mobile money system:
Despite the fact that M-Pesa started in 2008, mobile money in most other countries in sub-Saharan Africa is kind of crap by comparison (much more expensive, worse service, smaller agent network, etc.) because most telecoms have not even been able to copycat M-Pesa effectively. By executing better, you can speed up the adoption of mobile money.
Mobile money systems have network effects, meaning that it is somewhat path-dependent which one “wins the market” in a country. Most current mobile money systems that win are the ones offered by monopoly telecoms, so they end up both charging a lot themselves, and also entrenching the telecom’s monopoly. If you were to, say, start an EA mobile money system that wasn’t telco-affiliated, and preferred to lower prices rather than raise them at scale, you could generate a lot more surplus.
If anyone is excited about that, Wave is hiring for many roles, especially engineers—you can contact me here or at ben@wave.com :)
Some of your “conservative” parameter estimates are surprising to me.
For instance, your conservative estimate of the effect of diminishing marginal returns is 2% per year or 10% over 5y. If (say) the total pool of EA-aligned funds grows by 50% over the next 5 years due to additional donors joining—which seems extremely plausible—it seems like that should make the marginal opportunity much more than 10% less good.
You also wrote
we’ll stick with 5% as a conservative estimate for real expected returns on index fund investing
but used 7% as your conservative estimate in the spreadsheet and in the bottom-line estimates you reported.
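For concreteness, here’s the quick arithmetic behind both points (my own check, with assumed time horizons):

```python
# 2%/year of diminishing marginal returns over five years:
print(f"{1 - 0.98**5:.1%} less good after 5y")  # ~9.6%, i.e. roughly the quoted 10%

# Gap between the stated 5% and the actually-used 7% return assumption:
for years in (5, 10, 20):
    gap = 1.07**years / 1.05**years - 1
    print(f"over {years}y, 7% compounds to {gap:.0%} more than 5%")  # ~10%, ~21%, ~46%
```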
I’m looking forward to CEA having a great 2020 under hopefully much more stable and certain leadership!
I’d welcome feedback on these plans via this form or in the comments, especially if you think there’s something that we’re missing or could be doing better.
This is weakly held since I don’t have any context on what’s going on internally with CEA right now.
That said: of the items listed in your summary of goals, it looks like about 80% of them involve inward-facing initiatives (hiring, spinoffs, process improvements, strategy), and 20% (3.3, 4.1-5) involve achieving concrete outcomes that affect things outside of CEA. The report on progress from last year also emphasized internal process improvements rather than external outcomes.
Of course, it makes sense that after a period of rapid leadership churn, it’s necessary to devote some time to rebuilding and improving the organization. And if you don’t have a strategy yet, I suppose it makes sense to put “develop a strategy” as your top goal and not to have very many other concrete action items.
As a bystander, though, I’ll be way more excited to read about whatever you end up deciding your strategy is than about the management improvements that currently seem to be absorbing the bulk of CEA’s focus.
Hmm. You’re betting based on whether the fatalities exceed the mean of Justin’s implied prior, but the prior is really heavy-tailed, so it’s not actually clear that your bet is positive EV for him. (e.g., “1:1 odds that you’re off by an order of magnitude” would be a terrible bet for Justin, because he has 2⁄3 credence that there will be no pandemic at all).
Justin’s credence for P(a particular person gets it | it goes world scale pandemic) should also be heavy-tailed, since the spread of infections is a preferential attachment process. If (roughly, I think) the median of this distribution is 1⁄10 of the mean, then this bet is negative EV for Justin despite seeming generous.
In the future you could avoid this trickiness by writing a contract whose payoff is proportional to the number of deaths, rather than binary :)
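To illustrate the mechanism, here’s a minimal sketch with made-up numbers, using a lognormal as a stand-in for Justin’s heavy-tailed prior, calibrated so the median is ~1/10 of the mean:

```python
import numpy as np

rng = np.random.default_rng(0)
# Lognormal where mean/median = exp(sigma^2 / 2) = 10.
sigma = np.sqrt(2 * np.log(10))
median = 1e5  # hypothetical median death toll; the mean is then ~1e6
deaths = median * rng.lognormal(mean=0.0, sigma=sigma, size=1_000_000)

mean = deaths.mean()
print(f"median ~ {np.median(deaths):.2g}, mean ~ {mean:.2g}")
# A 1:1 binary bet that deaths will exceed the prior mean wins well under
# half the time under this prior, so it's negative EV despite seeming generous:
print(f"P(deaths > mean) ~ {(deaths > mean).mean():.2f}")  # ~0.14
```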
Looks like if this doesn’t work out, I should at least update my surname...