Some excellent points.
In addition, I’m confused about the figure of $5-10m for spending on alcohol. This is roughly how much is spent by just two alcohol charities in the UK (Drinkaware and Alcohol Research UK). So global philanthropic spending on alcohol is presumably much higher—and then there’s also any government spending.
Perhaps the $5-10m figure is only supposed to apply to low- and middle-income countries, or to money moved as part of development assistance for health?
I’m no longer going to engage with you because this comes across as being deliberately offensive and provocative.
Assuming that first claim is true, I’m not sure it follows that deferred donation looks even better. You’d still need to know about the marginal cost-effectiveness of the best interventions, which won’t necessarily change at the same rate as the wider economy.
The cost-effectiveness of interventions doesn’t necessarily stay fixed over time. We would expect it to get more expensive to save a life over time, as the lowest-hanging fruit should get picked first.
(I’m not saying it’s definitely better to donate now rather than invest and donate later; the changing cost-effectiveness of interventions is just one thing that needs to be taken into account.)
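To make the trade-off concrete, here is a minimal sketch (all numbers are invented for illustration, not estimates of real intervention costs or investment returns):

```python
# Hypothetical comparison of donating now vs investing and donating later.
# All numbers are made up purely to illustrate the trade-off.
donation = 10_000            # amount available today ($)
investment_return = 0.05     # assumed annual return on invested money
cost_growth = 0.07           # assumed annual rise in the cost of saving a life
cost_per_life_now = 5_000    # assumed current cost to save a life ($)
years = 10

lives_saved_now = donation / cost_per_life_now

future_donation = donation * (1 + investment_return) ** years
future_cost_per_life = cost_per_life_now * (1 + cost_growth) ** years
lives_saved_later = future_donation / future_cost_per_life

print(f"Donate now: {lives_saved_now:.2f} lives")
print(f"Invest, then donate in {years} years: {lives_saved_later:.2f} lives")
```

If the cost of saving a life rises faster than invested money grows, donating now wins; if it rises more slowly, waiting wins. The whole question reduces to the gap between those two rates.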
Points (1) and (3) relate to the value of the intervention rather than the value of the life of the beneficiary. If the intervention is less likely to work, or risks causing negative higher-order outcomes, then we should take that into account in any cost-effectiveness analysis. I think EA is very good at reviewing issues relating to point (1). Addressing point (3) is much trickier, but there is definitely some work out there looking at higher-order effects.
Point (2) relates to the difference between intrinsic and instrumental value (as previously noted by Richard). From a utilitarian perspective, it seems accurate that economic productivity is an instrumental reason for favouring saving lives in wealthier countries.
However, this is not the only consideration when deciding where to donate. Firstly, it is typically much more expensive to save a life in a wealthy country, precisely because it is a wealthy country with relatively well-funded healthcare. Secondly, there are consequences beyond economic productivity. For example, people in wealthier countries may be more likely to regularly eat factory-farmed animals and contribute to climate change (on the other hand, because they are in a wealthier country with more resources, perhaps they are more likely to help solve these issues while also contributing to them).
This is a useful analysis, and taken together I agree it suggests there has been a negative impact overall.
However, I think you may be overly confident when you say things like “FTX has had an obvious negative impact on the number of donors giving through EA Funds”, and “Pledge data from Giving What We Can shows a clear and dramatic negative impact from FTX”.
The data appear to be consistent with this, but they could also be consistent with other explanations (or, more likely, a combination of explanations including FTX). For example, over the past couple of years there has been very high inflation across many countries, and a big drop in the value of many cryptocurrencies. Both might be expected to reduce the number of donors and the amount they donate.
Just a guess, but I assume the Nobel Peace Prize is typically given for more sustained behaviour over months/years, rather than one-off actions.
To the extent that this is based on game theory, it’s probably worth considering that there may well be more than just 2 civilizations (at least over timescales of hundreds or thousands of years).
As well as Earth and Mars, there may be the Moon, Venus, and the moons of Jupiter and Saturn (and potentially others, maybe even giant space stations). As such, any unwarranted attack by one civilization on another might result in responses by the remaining civilizations. That could introduce some sort of deterrent effect on striking first.
I think the purpose of the ‘overall karma’ button on comments should be changed.
Currently, it asks ‘how much do you like this overall?’. I think this should be amended to something like ‘how much do you think this is useful or important?’.
This is because I think there is still too strong a correlation between ‘liking’ a comment and ‘agreeing’ with it.
For example, in the recent post about Nonlinear, many people are downvoting comments by Kat and Emerson. Given that the post concerns their organisation, their responses should not be at risk of being hidden; their comments should be upvoted because it’s useful/important to recognise their responses, regardless of whether someone likes/agrees with the content.
This is a very helpful post. I’m surprised the events are so expensive, but the breakdown of costs and the explanations make sense.
That said, this makes me much more skeptical about the value of EAG given the alternative potential uses of funds—even just in terms of other types of events.
As suggested by Ozzie, I’d definitely like to see a comparison with the potential value of smaller events, as well as experimentation.
Spending $2k per person might be good value, but I think we could do better. Perhaps there is an analogy with cash transfers as a benchmark—what event could someone put on if they were just given that money?
For example, with $2k, I expect I could hire a pub in central London for an evening (or maybe a whole day), with perhaps around 100 people attending. So that’s $20 per person, or 1% of the cost of EAG. Would they get as much benefit from attending my event as attending EAG? No, but I’d bet they’d get more than 1% of the benefit.
Now what if 10 or 20 people pooled their $2k per person?
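As a rough back-of-the-envelope version (using the guessed pub figures above, not any official CEA numbers):

```python
# Back-of-the-envelope cost-per-attendee comparison.
# All figures are the guesses from the comment above, not CEA data.
eag_cost_per_person = 2_000     # rough EAG cost per attendee ($)
pub_hire_cost = 2_000           # guessed cost to hire a central London pub for an evening ($)
pub_attendees = 100

pub_cost_per_person = pub_hire_cost / pub_attendees
cost_ratio = pub_cost_per_person / eag_cost_per_person

print(f"Pub event: ${pub_cost_per_person:.0f} per person "
      f"({cost_ratio:.0%} of the EAG cost per person)")
# Under these assumptions the pub event only needs to deliver more than 1%
# of EAG's per-person benefit to beat it on cost-effectiveness.
```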
Nice study, thanks for sharing!
“Environmental and health concerns were found to be of increasing importance among those adopting their diet more recently, which may reflect increasing awareness of and advocacy regarding possible health benefits of plant-based diets, as well as increasing concerns over anthropogenic climate change”
Could this also be due to survivorship bias? If environmental/health motivations are associated with giving up being veg*n sooner than animal welfare motivations, then in cohorts that adopted their diet longer ago, relatively more of the environmental/health motivated people would have dropped out compared to more recent cohorts.
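A quick simulation makes the mechanism clear. With made-up dropout rates, and motivations at adoption held constant across cohorts, the surviving members of older cohorts still end up skewed towards animal-welfare motivation:

```python
# Toy survivorship-bias simulation; all rates and shares are invented.
# Motivations at adoption are identical in every cohort, yet older cohorts'
# survivors skew towards the motivation with the lower dropout rate.
dropout = {"animal_welfare": 0.03, "environment_health": 0.10}   # annual quit rate
initial_share = {"animal_welfare": 0.5, "environment_health": 0.5}

for years_since_adoption in [1, 5, 10, 20]:
    surviving = {m: initial_share[m] * (1 - dropout[m]) ** years_since_adoption
                 for m in dropout}
    env_share = surviving["environment_health"] / sum(surviving.values())
    print(f"Adopted {years_since_adoption:>2} years ago: "
          f"{env_share:.0%} of survivors cite environment/health")
```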
It costs time to read it! Do you happen to know of a 10 minute summary of the key points?
I’d also note that hundreds of billions of dollars are spent on biomedical research generally each year. While most of this isn’t targeted at anti-aging specifically, there will be a fair amount of spillover that benefits anti-aging research, in terms of increased understanding of genes, proteins, cell biology etc.
Thanks for sharing!
“Our funding bar went up at the end of 2022, in response to a decrease in the overall funding available to long-term future-focused projects”
Is there anywhere that describes what the funding bar is and how you decided on it? This seems relevant to several recent discussions on the Forum, e.g. this, this, and this.
Sounds like he’d be good to have at the debate! But it seems very unlikely he’ll make the first one in a few weeks’ time. There seem to be 3 requirements to qualify for the first debate:
1. Pledge support for the eventual nominee. Hurd has said he won’t do this.
2. (from 538) “they must earn 1 percent support in three national polls, or in two national polls and two polls from the first four states voting in the GOP primary, each coming from separate states, based on polls recognized by the RNC and conducted in July and August before the debate.”
“As of Sunday [July 23rd], he had only one qualifying poll to his name...”
3. (from 538) “Meanwhile, a candidate must also attain at least 40,000 unique donors, with at least 200 contributors from 20 or more states and/or territories.”
“...and said last week that he was about one-fifth of the way to 40,000 contributors”
It sounds like he needs a big boost from somewhere—maybe if e.g. Elon Musk were to tweet about him and endorse his position on AI that would get him there (and convince him to change his mind re 1, though I’m not sure briefly speaking about AI alignment justifies this)?!
Re 2 - ah yeah, I was assuming that at least one alien civilisation would aim to ‘technologize the Local Supercluster’ if humans didn’t. If they all just decided to stick to their own solar system or not spread sentience/digital minds, then of course that would be a loss of experiences.
Thanks for clarifying 1 and 3!
Interesting read, and a tricky topic! A few thoughts:
What were the reasons for tentatively suggesting using the median estimate of the commenters, rather than being consistent with the SoGive neartermist threshold?
One reason against using the very high-end of the range is the plausible existence of alien civilisations. If humanity goes extinct, but there are many other potential civilisations and we think they have similar moral value to humans, then preventing human extinction is less valuable.
You could try using an adapted version of the Drake equation to estimate how many civilisations there might be (some of the parameters would have to be changed to take into account the different context, i.e. you’re not just estimating current civilisations that could currently communicate with us in the Milky Way, but the number there could be in the Local Supercluster).
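As a very rough illustration of what that adapted estimate might look like (every parameter value below is an invented placeholder, not a number I’d defend), the structure is just a product of factors, with the original equation’s ‘communication window’ terms dropped because we care about civilisations that could ever arise rather than ones we could contact now:

```python
# Toy Drake-style estimate adapted to the Local Supercluster.
# Every value is a placeholder chosen only to show the structure of the estimate.
stars_in_supercluster = 1e16        # placeholder star count for the Local Supercluster
frac_with_planets = 0.5             # fraction of stars with planetary systems
habitable_per_system = 0.1          # habitable planets per planetary system
frac_develop_life = 1e-3            # fraction of habitable planets that develop life
frac_develop_intelligence = 1e-3    # fraction of those that develop intelligence
frac_expansionist = 0.01            # fraction that would ever try to spread widely

expected_civilisations = (stars_in_supercluster * frac_with_planets *
                          habitable_per_system * frac_develop_life *
                          frac_develop_intelligence * frac_expansionist)
print(f"Expected expansionist civilisations: {expected_civilisations:,.0f}")
```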
I’m still not entirely sure what the purpose of the threshold would be.
The most obvious reason is to compare longtermist causes with neartermist ones, to understand the opportunity cost, in which case I think this threshold should be consistent with the other SoGive benchmarks/thresholds (i.e. what you did with your initial calculations).
Indeed the lower-end estimate (only valuing existing life) would be useful for donors who take a completely neartermist perspective, but who aren’t set on supporting (e.g.) health and development charities.
If the aim is to be selective amongst longtermist causes so that you’re not just funding all (or none) of them, then why not just donate to the most cost-effective causes (starting with the most cost-effective) until your funding runs out?
I suppose this is where the giving now vs giving later point comes in. But in this case I’m not sure how you could try to set a threshold a priori?
It seems like you need some estimates of cost-effectiveness first. Then (e.g.) choose to fund the top x% of interventions in one year, and use this to inform the threshold in subsequent years. Depending on the apparent distribution of the initial cost-effectiveness estimates, you might decide ‘actually, we think there are plenty of interventions out there that are better than all the ones we have seen so far, if only we search a little bit harder’
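One way that could look in practice (a sketch with an invented set of estimates and an arbitrary ‘top 25%’ cut-off, not a proposal for SoGive’s actual method):

```python
# Sketch: derive next year's funding threshold from this year's
# cost-effectiveness estimates. Data and the 25% cut-off are arbitrary.
estimates = [0.2, 0.5, 0.8, 1.1, 1.5, 2.0, 3.5, 5.0]  # e.g. value per $1,000 donated

def threshold_from_top_fraction(values, top_fraction=0.25):
    """Return the lowest estimate that still falls within the top fraction."""
    ranked = sorted(values, reverse=True)
    cutoff_index = max(int(len(ranked) * top_fraction) - 1, 0)
    return ranked[cutoff_index]

threshold = threshold_from_top_fraction(estimates)
print(f"Fund projects estimated above {threshold} units of value per $1,000")
```

If the distribution of estimates later looks very different, the threshold can be revised, which mirrors the ‘search a little bit harder’ judgement above.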
Trying to incentivise more robust thinking around the cost-effectiveness of individual longtermist projects seems really valuable! I’d like to see more engagement by those working on such projects. Perhaps SoGive can help enable such engagement :)
Assuming it could be implemented, I definitely think your approach would help prevent the imposition of serious harms.
I still intuitively think the AI could just get stuck though, given the range of contradictory views even in fairly mainstream moral and political philosophy. It would need to have a process for making decisions under moral uncertainty, which might entail putting additional weight on the views of certain philosophers. But because this is (as far as I know) a very recent area of ethics, the only existing work could be quite badly flawed.
Under that constraint, I wonder if the AI would be free to do anything at all.
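For what it’s worth, one proposal from that recent literature is to maximise expected choiceworthiness across the moral theories you give some credence to. A toy version (credences and scores entirely made up) shows both how it works and why it might not rescue the AI: the scores aren’t straightforwardly comparable across theories, so the choice of scale does a lot of the work.

```python
# Toy 'maximise expected choiceworthiness' decision under moral uncertainty.
# Credences and choiceworthiness scores are invented for illustration only.
credences = {"utilitarianism": 0.4, "deontology": 0.35, "contractualism": 0.25}

# How choiceworthy each option is according to each theory (arbitrary scale).
choiceworthiness = {
    "option_A": {"utilitarianism": 10, "deontology": -5, "contractualism": 2},
    "option_B": {"utilitarianism": 4,  "deontology": 6,  "contractualism": 5},
}

def expected_choiceworthiness(option):
    return sum(credences[t] * choiceworthiness[option][t] for t in credences)

for option in choiceworthiness:
    print(option, round(expected_choiceworthiness(option), 2))
print("Chosen:", max(choiceworthiness, key=expected_choiceworthiness))
```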
Thanks for clarifying!
Interesting point about Drinkaware; I didn’t know it was partly industry-funded. Given this, even though I’d hope the information they provide is broadly accurate, I’m assuming it is more likely to be framed in terms of personal choice than to advocate for government action (e.g. higher taxes on alcohol).
I presume the $5-10m also only refers to alcohol-specific philanthropy? I would expect there to be some funding for it via adjacent topics, such as organisations that work on drugs/addiction more broadly, or ones that focus on promoting nutrition and healthy lifestyles.