YC aims to make VCs money; the Charity Entrepreneurship programme focuses on helping poor people and animals. I don’t think the best ideas for helping poor people and animals are as likely to involve generative content creation as the best ideas for developed world B2B services and consumer products. The EA ecosystem isn’t exactly as optimistic about the impact of developing LLM agents as VCs are, either...
David T
Even if one takes the midpoint of the RP intervals as established fact, there are a lot of other assumptions Vasco’s arguments depend on, like estimates of the magnitude and duration of suffering a particular creature experiences, pain scales with thousands of points that effectively cancel out the RP weights, and the cost-effectiveness of brand new charities in a field (campaigning) where marginal cost-effectiveness is relatively difficult to measure.
Unlike for RP we don’t have published estimates of distributions or confidence intervals for these, but if we did they’d also be extremely wide and I’m not sure that animal welfare interventions would look better across most of the distribution for them.
Tax isn’t “wasted” by making money vanish from the economy, though (except for the deadweight loss); it’s just redistributed to other people via payouts, jobs, and loans, or indirectly via goods purchases. Statistically, some of these beneficiaries will enjoy longer lives through the same indirect income-mortality relationship you invoke to associate taxes with death. This is true even of public spending which is, relative to other spending, extremely wasteful and not evaluated as lifesaving even by its proponents.[1]
Which is why I’d argue it makes far more sense to focus cost-benefit analysis on deadweight losses and [counterfactual alternative] uses of public funds, because regardless of whether the tax is focused on creating “good things” or not, the net result of the transfer probably isn’t killing people.
In a developed country with a progressive tax system, the demographics paying most of the tax are unlikely to typically need the income to survive more than the state employees or other [indirect] recipients the resulting public expenditure benefits, even for ridiculous ideas like paying millions of dancers to create synchronised tributes to the president. So ignoring extremely indirect and difficult-to-quantify transfer effects (or explicitly treating them as netting out to zero) in favour of focusing on direct effects and deadweight loss in cost-benefit analysis is, if anything, probably biased against tax and spend. Empirically, tax burden is positively correlated with longevity, even amongst US states.
- ^
Paying superfluous bureaucrats may be an inefficient way of saving lives, but in exactly the same way taxing people is a very inefficient way of killing, especially where the tax is progressive above affordability thresholds and targeted benefits/rebates exist.
- ^
I didn’t vote, but it is an article about the “mortality cost of taxation” which imputes significant mortality whilst completely disregarding the expected mortality reduction of the tax money being spent back into the economy, and the likelihood that the redistribution is net positive. It appears the author acknowledged that bit after getting the negative response. (I also think the estimation methodology is flawed, which is very well explained by Soemano Zeijlmans’ comment)[1]
Sure, it’s not in dispute that taxation, like giving to overseas charity, can result in economic deadweight loss, and ceteris paribus economic deadweight loss can lead to lower life expectancy. A really interesting paper might even have explored this (most likely coming to the conclusion that the benefits of redistribution to people on very low incomes and funding health and education vastly exceeded deadweight loss in modern welfare states, but it would be wise to ensure that very little of the tax burden fell on the poor and to minimise government waste). But if you write an article which is the equivalent of “the mortality cost of overseas aid” which skips the bit where actually overseas aid is pretty good at reducing mortality it’s probably going to get some pushback, especially here.
I don’t think it’s “significant bias” talking, because I suspect an article about the “mortality cost of markets” which focused on the role of markets in depriving people of stuff and disregarded their role in getting stuff produced altogether would accumulate disagree votes here too.
- ^
though tbf I’ve seen other flawed methodologies estimate significantly higher mortality cost
- ^
Aside from these complications, I also don’t see much, if any, benefit to regular founders of this “give away equity early” arrangement outside the scope of support from AIM or a similar social enterprise organization that actually aims to help their business.
Founder’s Pledge’s pitch to founders (and other HNW individuals) is straightforward: pledge to give away part of your wealth when you think it’s optimal from the perspective of value maximization, donation opportunities, exit strategy, and tax efficiency, and they’ll present donation opportunities which align with your priorities and approach to evidence.
Whereas this proposal seems to be “give us some of your equity and we’ll decide if, when and what we donate it to”. If founders want a fund to do the allocations for them, they’ll find funds that will accept their equity as soon as it becomes liquid anyway, and those funds will be able to make decisions on cashing out and disbursal without the admin costs of being small shareholders in many pre-profit or never-to-be-profitable startups, so they’ll probably be more efficient at doing so.
I think there’s probably room for more variations on the Founder’s Pledge model, but I don’t see this proposal being it in its current form.
0.864 s (= 24*60^2/(100*10^3)) of excruciating pain in humans neutralises 1 day of fully healthy life in humans. Do you think this is “unconventional and extremely skewed”?
Yes. I can’t think of any pain for which I would prefer to die rather than suffer it for 0.864 seconds per day, particularly not if the remaining aspects of my life were “practically maximally happy”.[1]
I find it even harder to imagine that an insect can distinguish between painful sensations to the degree that a pain scale with at least 100k points on it would be appropriate to approximate their welfare on,[2] still less that an appropriate use of such a scale is to multiply a human/insect welfare ratio to conclude that the complete cessation of function of that simple insect nervous system is a few orders of magnitude more intense conscious experience than the “practically maximally happy” (or even average) utility of a human.
- ^
If I did think [potential] sub second pain was as significant as an entire day’s welfare, I would probably not endorse electrical stunning...
- ^
I mean, they’ve only got 200k neurons to divide between all their functions. (This isn’t an argument for neuron count being a good proxy for moral weights overall, merely an observation of how extreme the pain scale looks in the context of how simple the insect’s system for parsing stimuli appears to be)
- ^
For my guess that excruciating pain is 100 k (= 10*10^3/0.1) times as intense as fully healthy life, the 119 mosquito-seconds of excruciating pain per mosquito killed by ITNs neutralise 138 mosquito-days (= 119/60^2/24*100*10^3) of fully healthy life, or 1.79 human-days (= 138*0.013) of fully healthy life based on RP’s median welfare range of black soldier flies.
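The arithmetic in the quoted estimate can be sanity-checked with a short script (all figures are taken from the quote above; the 0.013 welfare range is RP’s median for black soldier flies, as the quote states):

```python
# Sanity-check of the quoted mosquito-pain arithmetic.
SECONDS_PER_DAY = 24 * 60**2  # 86,400 seconds in a day

# Claimed intensity ratio: excruciating pain is 10*10^3/0.1 = 100,000
# times as intense as fully healthy life.
intensity_ratio = 10 * 10**3 / 0.1

# Seconds of excruciating pain that neutralise one fully healthy day.
neutralising_seconds = SECONDS_PER_DAY / intensity_ratio
print(neutralising_seconds)  # 0.864

# 119 mosquito-seconds of excruciating pain per mosquito killed by ITNs,
# converted to fully healthy mosquito-days neutralised.
pain_seconds = 119
mosquito_days = pain_seconds / SECONDS_PER_DAY * intensity_ratio
print(round(mosquito_days))  # 138

# Convert to human-days using RP's median welfare range for black
# soldier flies (0.013).
welfare_range = 0.013
human_days = mosquito_days * welfare_range
print(round(human_days, 2))  # 1.79
```

The figures do reproduce the quoted 0.864 s, 138 mosquito-days and 1.79 human-days, so the disagreement below is with the inputs (the 100,000x intensity ratio and the use of the welfare range as a multiplier), not the arithmetic.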
Thanks for that clarification. So essentially your claims rest on the utility value of over a day and a half of human life being lower than that of two minutes of a dying insect.
Two comments here:
This, like some of your other estimates, relies rather heavily on an unconventional and extremely skewed pain scale, whereby a certain degree of pain is worth many times more than maximal pleasure,[1] as well as confidently attributing that maximal degree of pain, which vastly exceeds the pleasure experienced by more complicated creatures, to a particular scenario.
I’m not sure this is actually how RP intend their welfare ranges to be used. My understanding (and I welcome clarification/correction from RP on this point) is that when their researchers estimate that $creature’s welfare range is 1.3% that of humans, they intend that to be interpreted as “$creature’s pain sensations are at most 1.3% as intense as human experience”, not “to establish how intensely $creature feels pain, multiply 1.3% by a pain scale which may contain an arbitrarily large number of digits, to reach the conclusion that this creature’s pain is potentially thousands of times as intense as human pleasure.”
I’d also point out that even with those pain scales and welfare ranges, the calculation looks completely different if one also factors in potentially intense human pain from [nonfatal] malaria infections, contracted multiple times per year and experienced over several days, with [rare] neurological symptoms which may persist for the rest of a natural human life. Again, I’m not sure exactly what a pain scale for cerebral malaria should look like, but I’m unconvinced there are reasons for regarding it as so much less intense than mosquito pain that it can be disregarded when comparing between species.
- ^
I recognise that extremely wide-ranging and asymmetric pain scales are convenient to pure hedonic utilitarians who might otherwise be troubled by philosophical problems like utility monsters or trading off a single torture for a speck of dust in my eye: I just think they’re unusual positions not well supported by evidence.
Which 2 min of pain in mosquitoes are you referring to?
The 2 minutes corresponds to the estimated 119 seconds of excruciating pain per mosquito death in the aggregate estimate in your spreadsheet, comprising nearly all the estimated utility loss.
I do not kill mosquitoes or other insects inside my house, but I guess quickly crushing insects causes them much less pain than ITNs.
It was less about your personal footprint and more about the spiders. I once lived in a place by a river where an enormous quantity of insects was attracted to any sort of light bulb, which was where the spiders liked to dine out (unless they were deterred with peppermint spray or their cobwebs were repeatedly swept away). A web full of wriggling flies wasn’t a particularly attractive sight, but I’m disinclined to believe that web was experiencing utility loss far more significant than anything going on in my life.[1] But since you are arguing a few minutes of a single insect ingesting a neurotoxin may be of extremely high negative value, keeping spiders away from insects using cheap peppermint spray seems like a highly net-positive form of harm reduction worth considering?
My assumption was somewhat informed by a trip I did to Moshi (Tanzania) in early 2020. There were certainly more than 1 mosquito biting me per hour at dusk, and I was using repellent if I recall correctly.
Outdoors at dusk is peak mosquito time though, and 2-3 mosquitos are capable of a lot of bites. I would imagine you had access to some sort of treated nets, and didn’t have to clean 20 or 30 dead mosquitos off the floor every day?
- ^
I’d have a particularly hard time believing insects had evolved a complex and intense appreciation of neurological pain whilst far more useful traits like navigation were as simplistic and mechanistic as repeatedly flying into a light source…
- ^
If I’m understanding your calculations correctly, the underlying assumption is that the pain you estimate a mosquito to experience for two minutes has the same weight as an entire afternoon of incomparably blissful human existence, even taking into account the cognitive differences between a human and a mosquito? There doesn’t appear to be an obviously correct way to weight the relative intensity of experience of a human and a mosquito, but this one seems like an outlier; typically arguments for considering insect suffering depend on insects being more numerous, rather than their individual suffering being orders of magnitude more intense than human enjoyment. In all seriousness, if you do attach such high weights to the possible suffering of individual insects, I highly recommend nontoxic spider repellent, especially around your light fittings, as an extremely cost-effective intervention.
Some of your more quantifiable estimates also seem selected to be particularly unfavourable to humans. For example, the robustly established fact that humans experience days of pain from malaria infections (including the vast majority of malaria infections, which are nonfatal) is disregarded. Medical literature evaluating anti-malaria interventions often focuses on mortality rather than morbidity too, but it’s not weighing up human DALYs against a few minutes of mosquito morbidity! Likewise, the assumption that a typical ITN is killing an average of 24 mosquitos per day seems to depend on an inflated number of mosquitos per dwelling, even before the mild repellent effect and low killing efficiency of fleeting contact with the nets is considered.
Agree that I don’t think anticapitalism is tractable for anticapitalism’s sake, never mind as a solution to specific AI companies’ behaviour.
I also agree that it’s worth understanding why various anticapitalist regimes (most obviously Marxism-Leninism) failed[1]. Perhaps the biggest cautionary tale is that self-styled “revolutionary” Marxist-Leninist regimes ultimately evolved from extrapolating trends into a theory of a big, happy future after radical change. But two of the biggest reasons Marxism-Leninism failed, corruption and authoritarianism, are not unique to socialism, and the most distinctive reason (centrally planned economies didn’t allocate enough resources to serving consumer demand and rewarding work ethic and ingenuity) loses significance in a hypothetical long-term future in which resources aren’t particularly scarce, human work ethic isn’t a constraint, and maybe the “ingenuity” comes from computer-based processes you probably don’t want to restrict to capital owners. If you think an era of AI-driven growth taking us close to post-scarcity is near,[2] it probably doesn’t make much sense to worry about preserving the twentieth-century growth model. But at the same time, it’s also too early to experiment with putative replacements.
I’d add that to the extent conscious experience can be considered “self evident” only one’s own experience of pain and pleasure can be “self evident” via conscious experience.
If Nunik’s contention is that only things which achieve that experiential level of validation can be assigned intrinsic value with intuitions carrying zero evidential weight, it seems we would have to disregard our intuitions that other people or creatures might have similar experiences, and attach zero value to their possible pain/pleasure.
I mean, hedonic egoism is a philosophical position, but perhaps not a well-regarded one on a forum for people trying to be altruistic...
It feels unlikely either that it would create an actually valid natural experiment (as you acknowledge, it’s not a huge proportion of aid, and there are a lot of other factors that affect a country) or that it would persuade people to do aid differently.
Particularly when EA’s GHD programmes tend to be already focused on stuff which is well-evidenced at a granular level (malaria cures and vitamin supplementation) and targeted at specific countries with those problems (not all developing countries have malaria), by organizations that are not necessarily themselves EA, and a lot of non-EA funders are also trying to solve those problems in similar or identical ways.
It also feels like it would be a poor decision for, say, a Charity Entrepreneurship founder who identified a problem she could make a major difference on, based on her extensive knowledge of poverty in India, to instead try the programme in a potentially different Guinean context she doesn’t have the same background understanding of, simply because other EAs happened to have diverted funding to Guinea for signalling purposes.
I think the bigger question, as you’ve summarised at the end, is “what effect would EA spending in these areas have”, particularly with GHD being very focused on measuring marginal impact and having some robustly data-supported alternatives. The reason why EA orgs have targeted lobbying more in other areas is the perception that those areas are neglected politically. Trade policy and migration aren’t, and I don’t think EA is a bigger or more politically palatable tent than the people already promoting such ideas....
The virtue of free(r) trade, after all, is a 250-year-old economic argument, and one of the least contested arguments amongst people who have actually studied it. Technocrats working in government learn it in their undergrad economics classes; think tanks of various degrees of partisanship widely promote it. It’s about the only point of agreement between George Soros and the Koch brothers, the world’s biggest-spending donors to economic policy lobbying. It’s difficult to see where EA funding changes anything, and the context (Trump promoting protectionism on the [incorrect] assumption that if it hurts foreigners it’ll help Americans) couldn’t be a worse time to contemplate shifting emphasis from trade benefiting everyone to the even less contested fact that it benefits foreign exporters in poor countries. Maybe a lobbyist in Brussels could get a slightly more sympathetic hearing from people running that free trade bloc, but the idea that the EU’s external tariff policies harm the developing world isn’t one they haven’t heard before, and they have entrenched interests in protected industries to look after, and full-blown negotiations with other countries when they’re looking at lowering external trade barriers too.
Immigration is something facing even louder political pushback in more countries, and the basic fact that immigration tends to benefit immigrants’ families is even less contested,[1] even by people who don’t like immigrants or the idea of immigration very much. But ultimately there are a lot of people in those groups (whilst ironically, the self-interests of politicians they elect on a “tough on migration” platform keep employment visas open). I guess that’s evidence you’re right that changes can be made at the margin without [or despite] large political debate, but they’re keeping guest worker programmes because they’re worried about the consequences of a lack of guest workers, not because they feel compelled to offer routes out of poverty, and the accompanying lobbying is likely to reflect that. The travel visa issue fits into that same “uphill struggle” bracket. I think there’s potential to make a huge difference to individuals’ experience of immigration at the margin (not necessarily cost-effectively, although making it self-financing is a possibility), but it seems like a losing cause in advocacy terms.
Looking at remittance barriers might be more neglected, but I’m under the impression that the main factors pushing those remittance fees as high as 6-7% aren’t regulatory, they’re the “last mile” to typically unbanked people often in remote areas. That imposes service costs and tends towards natural monopoly (except where it can be avoided e.g. with M-PESA). I think some of those fees trickle into local remittance company offices and agents in villages anyway. That doesn’t mean there isn’t scope for bringing those fees down but I don’t know how easy something like the US-Mexican system is to implement in practice. I think some of the implementation barriers aren’t on the developed country side…
- ^
with all due respect to “brain drain” arguments that are probably reasonable for aggregate impact in some sectors
- ^
I agree with the general point that Zuckerberg is too committed to being Facebook boss to give much of his stock away now, but he and his wife put $2b in Facebook shares into his own foundation, which isn’t particularly EA-inclined (either explicitly or broadly). That’s less than Moskovitz-Tuna from a bigger chunk of wealth, but it’s non-trivial, and certainly enough to show he’s not taking most of his cues from them.
I don’t consider this to be any sort of failing on Dustin’s part (I don’t expect my bosses to listen to my donation philosophy if they 100x their current net worth either, even though they definitely have some points of agreement with me and trust my judgement on some things) and think the more salient question is “why have so few people that are not Mark Zuckerberg but are also vaguely in the orbit of EAers donated to EA causes compared with other causes”
As for SBF, his “Future Fund” was less than FTX committed to stadium sponsorship, so I don’t think the desire to top that up can be blamed for his recklessness (even if the broader conceit that everything he did was for the greater good was). It’s absolutely possible to give significant amounts to philanthropic causes (EA or otherwise) and retain control of a business without being Sam.
“Conservative” can mean a lot of things to a lot of different people worldwide.
For people whose conservatism is rooted in sets of religious beliefs, it seems that there already are initiatives (for Christianity and Islam at least) to encourage them to associate their religious principles with EA principles. The average EAer might be more likely to read New Atheist blogs than attend church, but I don’t get the impression that anyone’s doing anything to actively discourage religious people’s participation or that their funding hurdles are higher (I could be wrong on this). Limiting factors are likely to be that religious conservatives already have their own philanthropic movements, and sometimes conflicting ideas, and are far more likely to engage on a “these orgs are cost effective ways of saving lives” level than a “let’s do lots of hedonic utility calculations and speculate about man creating superintelligences in their own image” level.
From the point of view of the individualist, market-oriented right, I think EA is fairly firmly rooted in that already. It’s a movement which values prediction markets, touts billionaire philanthropy as a solution to global problems and has relatively little interest in redistributive social policies and minimal interest in capital allocation. The area where EA farmed animal welfare seems to stand out from other animal welfare movements is that some organizations are willing to work with commercial farmers (and it’s totally receptive to supporting meat-culturing). EA might not appeal to the subset of the economic right that thinks that markets are so perfect at allocation that it would be wrong to suggest that poor people deserve a chance of handouts or that rich people should feel some sense of obligation to donate, but I’m not sure there’s much point in trying there!
Likewise, for the “Chesterton’s fence” type conservatives who are principally cautious that drastic changes to the status quo might be harmful, EA really sounds like the movement for them![1] It takes an incrementalist approach to lifting people out of poverty and protecting animals, with existential risk reduction as a major cause area.[2] That might actually be a growth area, but I’ve no idea how to reach those sorts of people.
For nationalists, EA’s assumption that people are fundamentally equal wherever they are might be a sticking point (and yet oddly isn’t when it comes to the likes of Hanania and the HBD set, who find other aspects of EA weirdness interesting), but I’m not sure that toning down that message to appeal to people who worry that the movement isn’t “$Country First” enough would be a positive step (or that EA has much to say about the national defence strand of conservatism). Technically “EA but local” could nevertheless become a worthwhile thing (independently from a mainstream conservative-liberal dichotomy), but I suspect programmes to help the poorest people locally would outrage nationalists shouting about borders and trade wars as much as they would attract them...
And lastly, trying to appeal to the sort of partisans whose attraction to EA would be based on it publicly echoing support for local [notionally] conservative political figures and dislike of the liberal/left party and selected groups of Bad People would be obviously pointless and counterproductive, but I’m pretty sure that wasn’t what was suggested here.
- ^
a far better fit than the populist right which seems to have the precise opposite set of priorities...
- ^
on the other hand they might be more sceptical than average of EA’s futurism. But they’re definitely not the only people that feel that way about EA’s futurism, and “appeal to conservatives” isn’t the most compelling argument for shifting that emphasis.
- ^
I used to think the same, but now I see that many GWWC pledgers and donors mention 80k as the reason why they’re pledging or donating, often to neartermist causes.
How many of them have made that choice recently though? I know 80k still talks about earning to give (which IIRC it was once the major proponent of) and GiveWell-recommended charities in its intro material, and hosts all sorts on its podcasts and job boards, but its “recommended careers” are basically all longtermism (or EA community/research stuff), and 80k are explicit about what their priorities are and that these don’t include “neartermist” causes.
So I don’t think it’s surprising that Rutger doesn’t recommend them if he doesn’t share (or even actively disagrees with?) those priorities even if his current focus on persuading mid-career professionals to look into alternative proteins and tobacco prevention sounds very EA-ish in other respects. I’m curious whether he mentioned ProbablyGood or if he’s even aware of them?
Think the main reason it doesn’t get talked about much is that impoverishing other countries was baked into the whole “America First” idea in the first place, including the [obviously incorrect] belief that trade is essentially zero-sum, so making these countries poorer is necessary to make Americans richer. But Trump also got votes from a lot of Americans whose main concern was rising prices, so it’s particularly salient that the first major effect of blanket tariff increases on consumer goods will be their cost of living going up...
(I think also the effects of US tariff levels on the typical <$2 a day person are relatively indirect: most of them aren’t involved in direct exports to the US from countries likely to be major tariff losers, especially if he turns out to be far more interested in restricting imports of Chinese manufactured alternatives to US luxury goods than cheap foodstuffs. Lower global economic output will slow their local economies down too, but that impact feels less tangible, and to an extent is balanced out by other factors like China’s increased interest in trading with the global South and whatever happens to energy prices.)
“Has he been net positive for humanity overall” would be clearer that it’s looking at everything he’s done so far.
But I actually think it’s more interesting if it’s an ambiguous question. The stuff he’s done so far is significant but not necessarily aligned with what he’s doing now and what he might do or intend to do in future. The trajectory he is on now is… not upward. The influence that he has now isn’t necessarily more than when few people knew who he was, and he sounded more strategic as well as more amicable then. And the stuff he may or may not do in future is speculation.
Yep. I agree it can be interpreted in other ways and would agree with you that, taking everything into account, he’s probably had more impact at the margin on the positive stuff than the negatives, so far. There was certainly a bigger shortage of people with the means and the motivation to take on EVs and commercial space in the early 2000s[1] than people willing to spout stupid stuff on social media in the last three years.
- ^
I think battery and photovoltaic manufacturing costs were coming down over the last decade regardless, but you don’t automatically get complex products out of that...
- ^
Not really. YC doesn’t just care about percentage of value capture, it also cares about the total amount of value available to capture. This tends towards its target market being deep-pocketed corporations and consumers with disposable income to spend on AI app platforms or subscription AI tools for writing better software, while completely ignoring the Global South and people who don’t use the internet much.
AIM cares about the opposite: people who don’t have access to the basics in life, and its cost-effectiveness is measured on non-financial returns.
But if the advice is bad it might actually be net negative (and AI trained on an internet dominated by the developed world is likely to be suboptimal at generating responses to people with limited literacy, on medical conditions specific to their region and poverty level, in a language that features relatively little in OpenAI’s corpus). And training generative AI to be good at specialised tasks to life-or-death levels of reliability is definitely not cheap (and nor is getting that chatbot in front of people who tend not to be prolific users of the internet).
Unlike many EAs, I agree that the threat to humanity posed by ChatGPT is negligible, but there’s a difference between that and trusting OpenAI enough to think building products piggybacking on their infrastructure is potentially one of the most effective uses of donor funds. Even if I did trust them, which I don’t for reasons EAs are generally aware of, I’m also not at all optimistic that their chatbot would be remotely useful at advising subsistence farmers on market and soil conditions in their locality.
And I’m especially not remotely confident it’d be better than an information website, which might not be VC-fundable but would be a whole lot cheaper to create and keep bullshit-free.
I agree this is also a significant factor