Among EAs, it has become de facto or even obligatory to “buy” Longtermism.
I don’t really agree with this, there are plenty of shorttermist EAs and the majority of EA funding goes to shorttermist causes.
Thanks for this thoughtful piece.
One thing that I worry about when communicating on LinkedIn about EA affiliations is do-gooder derogation. Currently I am in a position where I don’t have to worry about that as much, but I can easily imagine that employers might feel they can make a lowball offer to someone who is “just going to give it to charity anyway” (versus spend it on private school or whatever). For this reason I set my EA group affiliations on LinkedIn to private.
Malaria nets only last 3 years anyway, their direct impact does not require the world to last longer than that (although, perhaps you value saving a life less, if you think the world will soon end).
According to justdone.com, this post is 89% AI content, and it certainly reads that way to me.
Thanks Nick for your thoughts. I’ve read Poor Economics but I’m not sure that the arguments there apply to this situation.
I definitely agree that the poor struggle with purchases that require capital investment, such as buying in bulk. My sense is that this is partly because capital is just very scarce, and partly because of pressure to share any accumulated capital. But the menstrual cups aren’t very expensive and would be a one-time purchase, so I’m not sure that argument applies here; although I suppose that in a sense the cup is the bulk version of disposable products.
I am less familiar with the phenomenon of selling crops out of season. Could this be a social pressure thing where everyone feels compelled to invest in the village saving group?
And, I definitely agree that preventative measures are a tough sell to the poor, pretty much across the board. That is why we fund free bednet distribution, by the way! In fact in general diffusion of preventative innovations is quite slow, which is one reason why they are often subsidized or compelled by governments. Insofar as the menstrual cups constitute a preventative, I absolutely agree we should not expect the poor to buy them. But, your analysis suggests the main benefit is financial, and with a pretty quick return on investment, so this situation seems different from that.
Do you think Living Goods is well positioned to deal with those practicalities? If not, why not?
FWIW I doubt there are many (any?) EAs that would advocate for reallocating “all arts funding to top GiveWell charities”. Everything is at the margin!
Nick, thank you for studying this and for sharing your findings. I do think there’s probably something to this space.
That said, I think you know probably more than anyone in EA how good people in poverty are at saving money. Don’t you think it’s unlikely that there is an option available that could save tens of dollars a year, which people are not taking advantage of on their own initiative? I doubt that achieving sufficient capital is the issue (as for example with a tin roof) if we’re talking about a $7, or even $2 product.
Maybe another way of framing this is, why do you think that this is a market failure / why do you think the free market is not addressing this on its own?
I would be very curious to know if Living Goods has looked at this; I know that at one point at least they used to sell reusable pads. It seems like an obvious fit to me, and a much lower risk one than distribution for free.
Let us know what you find.
My understanding is that some electric and water utilities did a similar thing in the early days of the pandemic, for the same reasons.
If you are wondering how this change was received by the media, well:
Bloomberg: Anthropic Drops Hallmark Safety Pledge in Race With AI Peers
The Wall Street Journal: Anthropic Dials Back AI Safety Commitments
Gizmodo: Anthropic Rolls Back Safety Protocols as It Waits to Find Out If It’s Being Drafted by the Army
Engadget: Anthropic weakens its safety pledge in the wake of the Pentagon’s pressure campaign
I thought this was video game jargon!
I think putting yourself out there in a database is a good idea, as is finding recruiters that can introduce you to opportunities.
As far as rejections go… I think there is this common mindset that you just need to grind away at applications until you eventually make it through, but personally I think it’s more likely that if one is being rejected from >90% of applications, that is a sign that something is wrong. I feel like these two causes are the most common (but there could be many others):
The applicant is not actually qualified / not a good fit for these jobs, and should apply to other different jobs that are a better fit.
There is some problem with the way the applicant is presenting during the application process (for example, a problem with the resume, or unprofessional interview performance); in this case, the applicant should try to figure out what the problem is and fix it.
Is there a reason why the first sentence of this post would not suffice (even if perhaps moved to the end of the document)?
Hmm, I’m not confident that Bob is wrong here. It seems to me that there’s a quite plausible argument that EA’s involvement in AI has been net-negative, possibly so net-negative as to cancel out all of the rest of EA. You seem to assume that this was knowable in advance, but that’s not necessarily so.
Your argument seems to assume that one should “shut up and multiply” and then run with that estimated EV number; but there have been many arguments on this forum and elsewhere about why we shouldn’t trust naive EV estimates.
Median household (not personal) income in the Bay Area is well under $200,000, so I disagree that $600k is not doing “extremely well”.
However, I personally believe that most EA executives earning in the mid six figures could easily earn even more if they were to move to the private sector.
Just a note that in 2013, Google’s headcount was a little less than 50,000 people, so we are talking about a completely different scale from any EA or EA-adjacent organization. When you are hiring at scale, you can afford to take more risks on any given hire.
Personally I don’t think Sam Altman is motivated by money. He just wants to be the one to build it.
I sense that Elon Musk’s and Dario Amodei’s motivations are more complex than “motivated by money”, but I can imagine that the actual dollar amounts are more important to them than to Sam.
TBH my sense is that GiveWell is just being polite.
A perhaps more realistic motivation is that admitting animal suffering into GiveWell’s models would implicitly force them to specify moral weights for animals (versus humans), and there is no way to do that without inviting huge controversy and leaving at least some groups very upset. Much easier to say “sorry, not our wheelhouse” and effectively set animal weights to zero.
FWIW I agree with this decision (of GiveWell’s).
My understanding is that there is an extensive body of evidence that people become more rational and put in more cognitive effort when there are real-money stakes involved; but I would welcome commentary from someone more familiar with the literature.