Hmm. You’re betting based on whether the fatalities exceed the mean of Justin’s implied prior, but the prior is really heavy-tailed, so it’s not actually clear that your bet is positive EV for him. (E.g., “1:1 odds that you’re off by an order of magnitude” would be a terrible bet for Justin, because he has 2⁄3 credence that there will be no pandemic at all.)
Justin’s credence for P(a particular person gets it | it becomes a world-scale pandemic) should also be heavy-tailed, since the spread of infections is a preferential attachment process. If (roughly, I think) the median of this distribution is 1⁄10 of the mean, then this bet is negative EV for Justin despite seeming generous.
In the future you could avoid this trickiness by writing a contract whose payoff is proportional to the number of deaths, rather than binary :)
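To make the heavy-tail point concrete, here’s a toy simulation (all parameters made up, not Justin’s actual numbers): with 2⁄3 of the probability mass on “no pandemic” and a lognormal tail otherwise, the prior’s median sits far below its mean, so a 1:1 bet on “exceeds the prior mean” is nowhere near a fair coin flip.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy prior (hypothetical parameters): 2/3 credence of no pandemic
# (0 deaths); otherwise deaths are lognormal, i.e. heavy-tailed.
n = 1_000_000
pandemic = rng.random(n) < 1 / 3
deaths = np.where(pandemic, rng.lognormal(mean=10, sigma=2, size=n), 0.0)

prior_mean = deaths.mean()
prior_median = np.median(deaths)  # 0 here: most mass is "no pandemic"
p_exceeds_mean = (deaths > prior_mean).mean()

print(f"mean ~ {prior_mean:,.0f}, median = {prior_median:,.0f}")
print(f"P(deaths > prior mean) ~ {p_exceeds_mean:.2f}")  # well below 0.5
```

Under these assumptions, “deaths exceed the prior mean” happens only ~10% of the time, which is why a bet framed around the mean can be badly mispriced at 1:1 odds.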
Oops. I searched for the title of the link before posting, but didn’t read the titles carefully enough to find duplicates that edited the title. Should have put more weight on my prior that this would already have been posted :)
I’m guessing that they assumed we were exaggerating the numbers in order to make them more interested in working with us. The fact that you’re so ready to call anyone who lies about user numbers a “scammer” may itself be part of the cultural difference here :)
Examples (mostly from Senegal since that’s where I have the most experience, caveat that these are generalizations, all of them could be confounded by other stuff, the world is complicated, etc.):
Most Senegalese companies seem to place a much stronger emphasis on bureaucracy and paperwork.
When interacting with potential business partners in East Africa, we eventually realized that when we told them our user/transaction numbers, they often assumed that we were lying unless the claim was endorsed by someone they had a trusted connection to.
In the US, we have fully transparent salaries (everyone at the company can look up anyone else’s salary in a spreadsheet). We weren’t able to extend this norm to our Senegalese subsidiary because it caused too much interpersonal conflict. (This was at least partly the result of us not putting enough investment into making the salary scale work for everyone, but my understanding is that my Senegalese coworkers were pessimistic about bringing back salary transparency even if we fixed that.)
In Senegal, people seem less comfortable by default expressing disagreement with someone above them in the hierarchy. (As a funny example, I’ve had a few colleagues who, when I asked them a yes-or-no question, would answer “Yes” followed by an explanation of why the answer was no.)
Exporting different norms is quite hard at scale. You need to hire people who are closest to the norms you want, but they’ll still probably be far away, so you’ll also have to invest a lot in propagating the norms you want, which only really works well 1-on-1. When we needed to scale our local Senegal team quickly, we ended up having to compromise on some norms to do so (e.g. salary transparency, amount of paperwork).
Broadly agree, but:
You might end up making more impact if you started a startup in your own country and just donated your earnings to GiveWell / EA organizations (i.e. earned to give). This is because I think there are very few startups that benefit the poorest of the poor, since the poorest people don’t even have access to basic needs.
Can’t you just provide people with basic needs, then? Many of Wave’s clients have no smartphone and can’t read. Low-cost Android phones (e.g. Tecno Mobile) probably provided a lot of value to people who previously didn’t have smartphones. Providing people cell service is hard (if you’re not a telecom), but if an area has cell service and no internet, you can still build useful information products over USSD, SMS, etc., or via physical shops.
(I do think that many good startup ideas in the developing world involve providing relatively “basic” needs! But it seems to me like there’s decent opportunity there.)
Haha this is probably the first time someone said that about one of my essays—I’m flattered, and excited to potentially write follow ups!
Is there anything in particular you’re curious about? Sometimes it’s hard to be sure of what’s novel vs obvious/common knowledge.
I imagine that a large fraction of EAs expect to be more productive in direct work than in an ETG role. But I’m not too clear why we should believe that. The skills and manpower needed by EA organizations appear to be a small subset of the total careers that the world needs, and it would seem an odd coincidence if the comparative advantage of people who believe in EA happened to overlap heavily with the needs of EA organizations. Remember that EA principles suggest that you should donate to approximately one charity (i.e. the current best one). The same general idea applies to the need for talent: there are a relatively small number of tasks that stand out as unusually in need of more talent.
The “one charity” argument is only true on the margin. It would be incorrect to conclude from this that nobody should start additional charities—for instance, even though GiveWell’s current highest-priority gap is AMF, I’m still glad that Malaria Consortium exists so that it could absorb $25m from them earlier this year. Similarly, it’s incorrect to conclude from this style of argument that the social returns to talent should be concentrated in specific fields. While there may be a small number of “most important tasks” on the margin, the EA community is now big enough that we might expect to see margins changing over time.
Also, the majority of people who are earning to give would probably be able to fund less than one person doing direct work. If your direct work would be mostly non-replaceable, then earning to give compares unfavorably to doing direct work yourself. (It seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.)
If you’re really worried about value drift, you might be able to use a bank account that requires two signatures to withdraw funds, and add a second signatory whom you trust to enforce your precommitment to donate?
I haven’t actually tried to do this, but I know businesses sometimes have this type of control on their accounts, and it might be available to consumers too.
Whoops, sorry about the quotes—I was writing quickly and intended them to denote that I was using “solve” in an imprecise way, not attributing the word to you, but that is obviously not how it reads. Edited.
These theoretical claims seem quite weak/incomplete.
In practice, autocrats’ time horizons are highly finite, so I don’t think a theoretical mutual-cooperation equilibrium is very relevant. (At minimum, the autocrat will eventually die.)
Your suggestions about oligarchy mitigating the tyranny of the majority / collective action problems only apply to actions that are in the oligarchy’s interests. You haven’t made any case that the important instances of these problems are in an oligarchy’s interests to solve, and that doesn’t seem likely to me.
What’s the shift you think it would imply in animal advocacy?
I had one of his quotes on partial attribution bias (maybe even from that interview) in mind as I wrote this!
Yikes; this is pretty concerning data. Great find!
I’d be curious to hear from anyone at GWWC how this updates them, and in particular how it bears on their “realistic calculation” of their cost-effectiveness, which assumes 5% annualized attrition. (That’s not an apples-to-apples comparison, so their estimate isn’t necessarily off by literally 10x, but it seems like it must be off by quite a lot, unless the survey data is somehow biased.)
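For reference, here’s how a constant annualized attrition rate compounds over time (a quick sketch; the 5%/yr figure is GWWC’s stated assumption, and the other rates are purely hypothetical comparison points, not the survey’s estimate):

```python
# Cumulative retention after `years` under a constant annualized
# attrition rate `a` is (1 - a) ** years.
def retention(annual_attrition: float, years: int) -> float:
    return (1 - annual_attrition) ** years

# 5%/yr is GWWC's assumption; 20% and 40% are hypothetical.
for a in (0.05, 0.20, 0.40):
    print(f"{a:.0%}/yr attrition -> "
          f"{retention(a, 10):.0%} still giving after 10 years")
```

The point is just that small differences in the annual rate compound into very large differences in long-run retention, which is what drives the cost-effectiveness estimate.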
I suspect that straightforwardly taking specific EA ideas and putting them into fiction is going to be very hard to do in a non-cringeworthy way (as pointed out by elle in another comment). I’d be more interested in attempts to write fiction that conveys an EA mindset without being overly conceptual.
For instance, a lot of today’s fiction seems cynical and pessimistic about human nature; the characters frequently don’t seem to have goals related to anything other than their immediate social environment; and they often don’t pursue those goals effectively (apparently for the sake of dramatic tension). Fiction demonstrating people working effectively on ambitious, broadly beneficial goals, perhaps with dramatic tension caused by something other than humans being terrible to each other, could help propagate EA mindset.
worker cooperatives have positive impacts on both firm productivity and employee welfare; there is a lot more research showing that worker ownership is modestly better than regular capitalist ownership
This is causal language, but as far as I can tell (at least per the 2nd paper) the studies are all correlational? By default I’m very skeptical of ability to control for confounders in a correlational analysis here. Are there any studies with a more robust way to infer causation?
(PS: if you’re interested in posting but unsure about content, I’d be excited to help answer any q’s or read a draft! My email is in my profile.)
What EA is currently doing would definitely not scale to 10%+ of the population doing the same thing. However, that’s not a strong argument against doing it right now. You can’t start a political party with support from 0.01% of the population!
In general, we should do things that don’t scale but are optimal right now, rather than things that do scale but aren’t optimal right now, because without optimizing for the current scale, you die before reaching the larger scale.
I would be extremely interested if you were to hypothetically write an “intro to child protection/welfare for EAs” post on this forum! (And it would probably be a great candidate for a prize as well!) I think the number of upvotes on this comment shows that other people agree :)
Personally, I have ~zero knowledge of this topic (and probably at least as many misconceptions as accurate beliefs!) and would be happy to start learning about it from scratch.
“Cause X” usually refers to an issue that is (one of) the most important one(s) to work on, but has been either missed or deprioritized for bad reasons by the effective altruism community (it may come from this talk). So I’d expect a cause which the EA community decided was “cause X” to receive an influx of interest in donations and direct work from the EA community, like how GiveWell directed hundreds of millions of dollars to their top charities, or how a good number of EAs went to work at nonprofits working on animal welfare. (For a potentially negative take on being Cause X, see this biorisk person’s take.)