Hey! I am Mart, I learned about EA a few years back through LessWrong. Currently, I am pursuing a PhD in the theory of quantum technologies and learning more about doing good better in the EA Ulm local group and the EA Math and Physics professional group.
Mart_Korz
Thanks! I somehow managed to miss their pledge when looking at their website.
It does seem to be somewhat different from e.g. the 10% pledge: At least according to the current formulation, it is not a public or lifetime pledge but rather a one-year commitment:
This coming year, I pledge to give [percentage field] of my income to organizations effectively helping people in extreme poverty.
As an approach, this sounds very reasonable for TLYCS. I imagine that a lifetime pledge that adjusts to one’s income level is harder to resonate with emotionally and might feel like too large a commitment for many. And recommitting to the pledge yearly might make it easy for people to adjust their giving to their current income without needing to think about the maths.
Your table is very similar to the one Peter Singer presents close to the end of The Life You Can Save. I do like the mathematical clarity of your approach, and we now have an intuitive explanation for where the numbers come from—very nice!
Honestly, now that I write this, it seems strange that there is no pledge that matches this way of thinking. Your suggestion could be a nice complement to the existing 10% and Further Pledges, and it also surprises me that I have not heard about people actively donating according to Singer’s table. Maybe it is just a lot harder to explain than the other pledges, and this kind of defeats the point of a public pledge?
In case someone else wonders what the donation rate would look like for other choices of :
I think a pledge which puts the bar for zero donations at the US median income [edit: oops, I got that wrong and it is not the median income value. I still think that my point is directionally right] is a little strange – even in the US, many pledgers would never reach non-zero pledged donations, which seems off for a pledge that has income-sensitivity as a central property. Intuitively, a softer rule like “in any case, aim for 1% donations unless you really need all income” or a different reference point for the zero-donations bar would be more wholesome.
A very cool idea and nice implementation, thanks for sharing! I sympathize a lot with the idea and “we take responsibility to correct negative consequences of our actions where feasible” could be a good norm to coordinate around.
Some comments
I want the website to give confident, simple numbers so people don’t have to think about the uncertainty ranges. I want them to be given a simple number with no uncertainty, and explain the uncertainty further into the site if they dig further.
I think this makes a lot of sense! It can be tricky to get the balance right and at the moment I think some formulations that try to emphasize clarity err in the direction of being too confident—but I fully understand that it is a lot of effort to do these things well.
Formulations on the website:
Inspired by effective altruism principles, we’ve identified the four areas where your money can do the most good—and calculated exactly how much it takes to balance your share.
The formulation “the four cause areas where your money can do the most good” seems mistaken? If I got this right, the reasoning is to effectively undo (offset) the harms associated with one’s personal lifestyle for each major area individually. The effectiveness mostly enters when deciding which org to donate to within the chosen cause area. Of course, to some degree, the cause areas are still chosen with effectiveness in mind, but I think that a different formulation could capture the reasoning better. A suggestion might be
“Inspired by effective altruism principles, for each of the major areas where our lifestyle comes along with negative impact, we’ve identified where your money can do the most good—and calculated how much it takes to balance your share.”
Offset your impact
I feel the formulation “undo/offset my impact” is a little unfortunate as, in my mind, impact is mainly related to intended and thus positive consequences. It takes a little extra concentration to realize that in this case I do want to get rid of the impact. On the other hand, I cannot find a similarly short alternative formulation. Maybe “rectify your impact” could work?
Hiring is also hard if you’re progressively eliminating candidates across rounds, because you never can measure the candidates you rejected. The candidate pool is always biased by who you chose to advance already. This makes me feel like I’m never collecting particularly useful data on hiring in hiring rounds. I don’t ever learn how good the people I rejected were!
Isn’t this avoidable? I could imagine a system where you allow a small percentage of randomized “rejected” candidates into the next hiring round and, if they properly succeed there, into the third round. I have essentially no experience with how hiring works, but it seems to me that this could i) increase the effort that goes into hiring only moderately, ii) still be kind of fair to the candidates, and iii) give you some information on what your selection process actually selects for.
Holden Karnofsky has described his current thoughts on these topics (“how to help with longtermism/AI”) on the 80,000 Hours podcast – and there are some important changes:
And then it’s like, what do you do to help? When I was writing my blog post series, “The most important century,” I freely admitted the lamest part was, so what do I do? I had this blog post called “Call to vigilance” instead of “Call to action” — because I was like, I don’t have any actions for you. You can follow the news and wait for something to happen, wait for something to do.
I think people got used to that. People in the AI safety community got used to the idea that the thing you do in AI safety is you either work on AI alignment — which at that time means you theorise, you try to be very conceptual; you don’t actually have AIs that are capable enough to be interesting in any way, so you’re solving a lot of theoretical problems, you’re coming up with research agendas someone could pursue, you’re torturously creating experiments that might sort of tell you something, but it’s just almost all conceptual work — or you’re raising awareness, or you’re community building, or you’re message spreading.
These are kind of the things you can do. In order to do them, you have to have a high tolerance for just going around doing stuff, and you don’t know if it’s working. You have to be kind of self-driven.
He goes on to clarify that today, he sees many ways to contribute that are much more straightforward:
[...]. So that’s the state we’ve been in for a long time, and I think a lot of people are really used to that, and they’re still assuming it’s that way. But it’s not that way. I think now if you work in AI, you can do a lot of work that looks much more like: you have a thing you’re trying to do, you have a boss, you’re at an organisation, the organisation is supporting the thing you’re trying to do, you’re going to try and do it. If it works, you’ll know it worked. If it doesn’t work, you’ll know it didn’t work. And you’re not just measuring success in whether you convinced other people to agree with you; you’re measuring success in whether you got some technical measure to work or something like that.
Then, from ~1:20:00 the topic continues with “Great things to do in technical AI safety”
This is an interesting idea, thanks for sharing!
While thinking about your suggestion a little, I learned about RUTF (peanut-based ready-to-eat meals used to treat child malnutrition), which appears to be rather established. Using the UNICEF 2024 numbers, they distributed enough RUTF to feed half a million people throughout the year, if we go by calories.[1]
According to UNICEF:RUTF-price-data, a box of 150 meals (92 g each, which should correspond to ~500 kcal) costs them around $50 (using 2023 numbers and rounding up). Boiling this down to 2,000 kcal per day, this corresponds to $1.33/day for nutrition.
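To make the arithmetic explicit, here is a minimal sketch of the estimate above (all inputs are the rough, rounded figures from the comment, not authoritative UNICEF prices):

```python
# Rough RUTF cost-per-day estimate (numbers from the comment above).
box_price_usd = 50.0      # ~one box, 2023 numbers rounded up
meals_per_box = 150
kcal_per_meal = 500       # one 92 g sachet
kcal_per_day = 2000       # rough daily requirement

price_per_meal = box_price_usd / meals_per_box
meals_per_day = kcal_per_day / kcal_per_meal
cost_per_day = price_per_meal * meals_per_day
print(f"${cost_per_day:.2f}/day")  # → $1.33/day
```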
A few important aspects of RUTF differ from your suggestion (it is aimed at temporary treatment of malnutrition in children and not at general nutrition for all ages, the prices above are not consumer prices, the main ingredients differ, and RUTF avoids the requirement of adding safe water, which might not be easily available), but it seems to me that this supports your cost estimate, at least for large-scale purchases.
I think that together with existing consumer brands for meal-replacement powders, RUTF could be a second great reference for a similar and established idea :)
- ^
Of course, RUTF is used as a temporary treatment, so the amount corresponds to a potential 5.1 million treated children according to the linked dashboard
yes, totally agree 100% with this article—I have nothing to add.
Thanks for engaging!
This means, specifically, a flatter gradient (i.e., ‘attenuation’) – smaller in absolute terms. In reality, I found a slightly increasing (absolute) gradient/steeper. I can change that sentence.
I don’t think that is necessary—my confusion is more about grasping how the aspects play together :) I’m afraid I will have to make myself a few drawings to get a better grasp.
Very nice, thanks!
If people are getting happier (and rescaling is occuring) the probability of these actions should become less linked to reported LS — but they don’t.
When I first read this sentence I thought that your argument makes perfect sense, but then when I read
Surprisingly, the relationship appears to get stronger over time. That’s the opposite of what rescaling predicts!
in the Overall happiness section, my first thought was: “well, I guess people are getting more demanding”. And now I am confused. I could imagine thinking about “people don’t settle for half-good any more” as a kind of increased happiness (even if calling it “satisfaction” would be strange).
Independent of this, my personal impression about economic wealth ever since childhood has been that my physical needs are essentially saturated and my social environment is massively more important for my subjective well-being. And although the latter is influenced by wealth, it is much more strongly affected by culture. And although I can think of plenty cultural developments that I am thankful for, I can also think of many that don’t push me in a healthy direction.
I think it is good that (former) CEA has recognized that society has moved beyond distractingly specific names.
After Alphabet, Meta and X took the lead, I am excited about the impact that the Centre will create in adjacency to the EA community.
Update on the giving game contributions towards PlayPumps International: With some delay, we were able to find a way to donate the 2 € which were selected to go to PlayPumps by our participants.
That makes a lot of sense!
Consumers might not know or think much about the health aspects of things
This describes me quite well in many of my health choices, and unfortunately this is apparently really common.
potassium salt is 10x as expensive as normal salt
In my case, I also did not find pre-mixed salt at a price that makes sense to me—I bought a pharma-grade bag of KCl and mixed it with usual table salt myself[1], which resulted in a net price that is 3x that of the usual sodium salt.
So it goes back to policy, and whether governments should just regulate sodium content even in salt—we didn’t really explore this, given the higher evidence base and cheapness of salt policies.
That sounds very reasonable—I’ll be looking forward to hearing about updates in the future!
- ↩︎
with the hope that diluting by 1⁄3 will not be too much for the anti-hygroscopic components of the store-bought table salt
That makes a lot of sense—in practice, there are many relevant considerations and other interventions might well be preferable in many contexts.
The expert opinion
[...] though a Chinese RCT does show positive results, and the current evidence is convincing, still more studies are needed, with the magnitude of benefit not as large as you would think.
also sounds as if potassium-enriched salt surely helps to some degree, but probably isn’t a solution by itself. And I get the impression that research in the coming years will probably reduce the uncertainties here.
Apart from this, I am a bit surprised that the costs (“perhaps double the price”) would be a problem for richer countries. If I am understanding this right, this should still be obviously worth it as a health expenditure? A very simple estimate might be:
- lost expected life due to high blood pressure: ~2 years (scaling the DALY burden to a single person)
- expected gains from switching to potassium-enriched salt: ~1/2 year (I am guessing)
- expected costs: 80 years * 2⁄3 kg/year * $10/kg = ~$550
- resulting cost-effectiveness (assuming 1 year = 1 DALY): ~$1,100 / DALY averted
Of course this isn’t comparable to GiveWell effectiveness, but it is really cheap compared to other health expenses.
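For transparency, the back-of-the-envelope numbers above can be reproduced like this (every input is one of my guesses from the list, not a measured value):

```python
# Guessed inputs from the estimate above.
life_years_gained = 0.5     # guessed benefit of switching salt
years_of_use = 80           # lifetime of salt consumption
salt_kg_per_year = 2 / 3    # rough per-person salt use
cost_per_kg_usd = 10.0      # assumed price of enriched salt

lifetime_cost = years_of_use * salt_kg_per_year * cost_per_kg_usd
cost_per_daly = lifetime_cost / life_years_gained  # assuming 1 year = 1 DALY
print(round(lifetime_cost), round(cost_per_daly))  # → 533 1067
```

Rounding these up gives the ~$550 and ~$1,100/DALY figures in the list.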
I just realized that I could also just follow the links and found a part of the answer
[...] Another expert is more bearish, noting that though a Chinese RCT does show positive results, and the current evidence is convincing, still more studies are needed, with the magnitude of benefit not as large as you would think. That said, because it’s a substitution of sodium for potassium, there’s a double benefit for cardiovascular health; people don’t consume enough potassium, and potassium lowers blood pressure. And while there is a concern that increasing potassium intake across the population can create risk to people with chronic kidney disease, the evidence is that such people tend to suffer from cardiovascular disease anyway – most hypertension sufferers have higher risk of diabetes/obesity etc.
in section 4.1 1) g)
and also
Of huge interest too is potassium substitution; though evidence of that is fairly new, they think it is a game changer that can accelerate action. They are trying to figure out the name (e.g. potassium-enriched salts) from a public relations perspective. Increasing potassium reduces heart disease – it is an effective strategy. Low sodium salts in general do cost more – perhaps double the price. Then again, Himalayan salts are similarly twice as expensive, yet people still buy it – the challenge is getting the message out there, and that it is good for you (i.e. benefits of potassium); in Australia they are trying to understand the barriers to scaling up. There is research on how to get potash in a scalable way – there is a lot of potassium out there, and only a small amount is food grade (20%), with the rest (80%) used for things like fertilizer.
in section 3.3. Global Salt NGO, point 2.
I am happy to learn that people are working on this :) And it does make sense that the increased price also creates difficulties for adoption. This certainly isn’t a trivial problem. Also, I agree that the public relations perspective is important. Here in Germany, there were large health problems due to iodine deficiency, which were reduced by fortifying table salt—but even though the need for iodine hasn’t changed, people and products are starting to use the fortified salt less.
Regarding increasing potassium intake:
A few weeks ago, I heard about this as a good idea via a podcast which claimed that getting closer to the potassium recommendations would remove a large part of the problems of high sodium consumption. I switched my salt to 2⁄3 sodium and 1⁄3 potassium a few weeks ago, and so far I have not noticed negative effects on taste.[1]
Given that potassium is not that expensive, my impression was that a public policy of “everyone, potassium is x% part of table salt from now on” would lead to a large chunk of the benefits without people having to change their taste preferences much (by both decreasing sodium by x% and increasing potassium consumption correspondingly). This would increase the price of salt significantly, which should have similar effects to a sodium tax (the prices would still amount to low single-digit cent costs per day, even for high salt consumption). I would be curious about your thoughts on this, given that you have researched this topic a lot more deeply :)
Nonetheless I could still imagine that there are a number of foods with completely excessive amounts of salt for which other interventions would still be a good idea.
- ↩︎
I trust the nutritionist enough to be confident that this change is a good idea for me personally, but I did not look up the sources and I might well have misunderstood the effect-size of increasing potassium consumption
I am just coming from a What We Owe the Future reading group—thanks for reminding me of the gap between my moral intuitions and total utilitarianism!
One reason why I am not convinced by your argument is that I am not sure that the additional lives lived due to the unintended pregnancies are globally net-positive:
on the one hand, it does seem quite likely that their lives would be subjectively worth living (the majority of people agree with this statement, and it does not seem to me that these lives would be too different) and that they would have net-positive relationships in the future.
but on the other hand, given a level of human technology, there is some finite number of people on earth which is optimal from a total-utility standpoint. And given the current state of biodiversity loss, soil erosion and global warming, it does not seem obvious that humanity is below that number.[1]
as a third part, given that these are unintended pregnancies, it does seem likely that there are resource limitations which would lead to hardships if a person is born. We would need to know a lot about the life situation and social support structures of the potential parents if we wanted to estimate how significant this effect is, but it could easily be non-trivial.
edited to add and remove:
the number of 100 pregnancies averted does not correspond to 100 fewer children being born in the end. A significant part of the pregnancies would only be shifted in time. I would be surprised if the true number is larger than 10 and expect it to be lower than this. My reasoning here is that the total number of children each set of parents is going to have will hardly be reduced by 100x from access to contraception. If this number started at 10 children and is reduced to a single child, we have a reduction that corresponds to 10 fewer births per death averted. And stated like this, even the number 10 seems quite high (sorry, there were a few confusions in this argument)
This being said, the main reason why I am emotionally unconvinced by the argument you give is probably that I am, on some level, unable to contemplate “failing to have children” as something that is morally bad. My intuitions have somewhat caught up with the arguments that giving happy lives the opportunity to exist is a great thing, but they do not agree with the sign-flipped case for now. Probably, a part of this is that I do not trust myself (or others) to actually reason clearly on this topic, and it just feels like “do not go there” emotionally.
- ↩︎
It also does not seem obvious that we are above that number. Especially when trying to include topics like wild animal suffering. At least I feel confident that human population isn’t off from the optimum by a huge factor.
This is a good point, although I would argue that the reasons why practicing religion has these advantages are unrelated to it being a case of Pascal’s wager (if we let Pascal’s wager stand for promises of infinite value in general).
This is not enough to claim that Christianity as a whole holds this position, but there certainly exist sentiments in this direction such as
Revelation 3:15–16
I know your works: you are neither cold nor hot. Would that you were either cold or hot! So, because you are lukewarm, and neither hot nor cold, I will spit you out of my mouth.
(Holy Bible, New International Version)
I really like the description, but would like to add that infinities in the “size” of the universe could also refer to time: it might be that there is an infinite future which we could possibly influence, even if the spatial size of the universe is finite. I don’t think that anyone expects this to be true with anything approaching certainty (due to entropy, it seems likely that there is no way to sustain life/agents indefinitely), but it does not seem ruled out to me that there could be relevant loopholes, like cosmic expansion ensuring that entropy can just grow indefinitely, or other unusual scenarios (like the possibility of creating a pocket universe by triggering a big bang).
I did not downvote it myself, but to me the comment from idea21 seems off-topic to the post itself, which is a (not very strong) negative.