Hey! I am Mart, I learned about EA a few years back through LessWrong. Currently, I am pursuing a PhD in the theory of quantum technologies and learning more about doing good better in the EA Ulm local group and the EA Math and Physics professional group.
Mart_Korz
Regarding increasing potassium intake:
A few weeks ago, I heard about this as a good idea via a podcast, which claimed that getting closer to the potassium recommendations would remove a large part of the problems of high sodium consumption. I switched my salt to 2⁄3 sodium and 1⁄3 potassium, and so far I have not noticed any negative effects on taste[1].
Given that potassium is not that expensive, my impression was that a public policy of “everyone, potassium makes up x% of table salt from now on” would capture a large chunk of the benefits without people having to change their taste preferences much (by both decreasing sodium consumption by x% and increasing potassium consumption correspondingly). This would increase the price of salt significantly, which should have effects similar to a sodium tax (the prices would still amount to low single-digit cent costs per day even for high salt consumption).

I would be curious about your thoughts on this, given that you have researched this topic a lot more deeply :)
Nonetheless I could still imagine that there are a number of foods with completely excessive amounts of salt for which other interventions would still be a good idea.
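To make the “low single-digit cent costs per day” claim above concrete, here is a minimal sketch. The prices and the daily intake figure are my own illustrative assumptions, not numbers from any source:

```python
# Rough check of the claim that a potassium-enriched salt blend costs
# only low single-digit cents per day, even for high salt consumers.
# All prices and quantities below are illustrative assumptions.

salt_per_day_g = 10          # high-end daily salt intake (assumed)
table_salt_eur_per_kg = 1.0  # assumed price of plain table salt
enriched_eur_per_kg = 3.0    # assumed price of a 1/3-KCl blend

extra_cost_eur_per_day = salt_per_day_g / 1000 * (enriched_eur_per_kg - table_salt_eur_per_kg)
extra_cost_cents = extra_cost_eur_per_day * 100
print(f"extra cost: {extra_cost_cents:.1f} cents/day")  # extra cost: 2.0 cents/day
```

Even with a 3x price for the blend and high consumption, the daily extra cost stays in the low single digits of cents.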
- ↩︎
I trust the nutritionist enough to be confident that this change is a good idea for me personally, but I did not look up the sources and I might well have misunderstood the effect-size of increasing potassium consumption
- ↩︎
I am just coming from a What We Owe the Future reading group—thanks for reminding me of the gap between my moral intuitions and total utilitarianism!
One reason why I am not convinced by your argument is that I am not sure that the additional lives lived due to the unintended pregnancies are globally net-positive:
On the one hand, it does seem quite likely that their lives would be subjectively worth living (the majority of people agree with this statement, and it does not seem to me that these lives would be too different) and that they would have net-positive relationships in the future.
But on the other hand, given a level of human technology, there is some finite number of people on Earth which is optimal from a total-utility standpoint. And given the current state of biodiversity loss, soil erosion and global warming, it does not seem obvious that humanity is below that number[1].
As a third point, given that these are unintended pregnancies, it does seem likely that there are resource limitations which would lead to hardships if a person were born. We would need to know a lot about the life situation and social support structures of the potential parents to estimate how significant this effect is, but it could easily be non-trivial.
edited to add and remove:
The number of 100 pregnancies averted does not correspond to 100 fewer children being born in the end; a significant part of the pregnancies would only be shifted in time. I would be surprised if the true number were larger than 10, and I expect it to be lower than this. My reasoning here is that access to contraception will hardly reduce the total number of children each set of parents is going to have by a factor of 100. If this number started at 10 children and were reduced to a single child, we would have a reduction that corresponds to 10 fewer births per death averted. And stated like this, even the number 10 seems quite high. (Sorry, there were a few confusions in this argument.)
This being said, the main reason why I am emotionally unconvinced by the argument you give is probably that I am on some level unable to contemplate “failing to have children” as something that is morally bad. My intuitions have somewhat caught up with the arguments that giving happy lives the opportunity to exist is a great thing, but they do not agree with the sign-flipped case for now. Probably a part of this is that I do not trust myself (or others) to actually reason clearly on this topic, and it just feels like “do not go there” emotionally.
- ↩︎
It also does not seem obvious that we are above that number. Especially when trying to include topics like wild animal suffering. At least I feel confident that human population isn’t off from the optimum by a huge factor.
Regarding the adoption of a fully vegan diet, I can understand the argument.
Could you say something on your position about a ‘vegan-friendly’ position?
In my experience, diet and taste are strongly formed by habits and expectations, and with habit formation over a few years it should be easy to substantially reduce animal products in one’s diet (compared to the average Western diet) without needing to put in much money, time or mental effort. For example, one might decide not to actively seek vegan food if it comes at any relevant cost to oneself, but to treat it as a serious option otherwise. I think that even when including the danger of trivial inconveniences in this consideration, it should be possible for most people to slowly accumulate some easily accessible vegan foods that they enjoy into their diet without bearing nearly the full costs of becoming vegan.
Would you agree that such an approach to vegan food would increase the altruistic cost:benefit ratio?
This is kind of a detail, but if we already assume methods which would allow for unbounded value in the absence of vacuum decay, it should not be certain that the presence of vacuum collapse creates a bound.
I would expect that it basically assigns a finite (expected) lifetime to any single causally connected bubble of the universe, but this could be counteracted by sufficient rates of inflation creating and disconnecting ever more bubbles at a higher rate.

This point aside, I had not realized that there actually are theoretical expectations for the Higgs ground state(s). I only learned about the toy models in my lectures and never looked up how they relate to the full Standard Model.
Thanks!
Regarding “Pascal’s Mugging”:
I am not the author, so I might well be mistaken, but I think I can relate to the intended meaning more closely than “vaguely shady”. One paragraph is
EA may not in fact be a form of Pascal’s Mugging or fanaticism, but if you take certain presentations of longtermism and X-risk seriously, the demands are sufficiently large that it certainly pattern-matches pretty well to these.
which I read as: “Pascal’s mugging” describes a rhetorical move that introduces huge moral stakes into the world-view in order to push people into drastically altering their actions and priorities. I think that this in itself need not be problematic (there can be huge stakes which warrant change in behaviour), but if there is social pressure involved in forcing people to accept the premise of huge moral stakes, things become problematic.
One example is the “child drowning in a pond” thought experiment. It does introduce large moral stakes (the resources you use for conveniences in everyday life could in fact be used to help people in urgent need; and in the thought experiment itself you would decide that the latter is more important) and can be used to imply significant behavioural changes (putting a large fraction of one’s resources to helping worse-off people).
If this argument is presented with strong social pressure not to voice objections, that would be a situation which fits under Pascal-mugging in my understanding.

If people are used to this type of rhetorical move, they will become wary as soon as anything along the lines of “there are huge moral stakes which you are currently ignoring and you should completely change your life-goals” is mentioned to them. Assuming this, I think the worry that
[...] the demands are sufficiently large that it certainly pattern-matches pretty well to these.
makes a lot of sense.
Fascinating! I would appreciate an essay arguing for this rather strong claim:
My conclusion is that if something is expressed only in writing it cannot reach the absolute majority of the population, any more than a particularly well-written verse in French can permeate the Anglosphere.
I have read weaker versions of how hard successful communication is, such as Double Illusion of Transparency and You Have About Five Words – but I think that your example is even stronger than this and an interesting addition.
Personally, I think I also belong to the group of 2nd-order-illiterate people, in that I need to push my concentration a lot in order to read with sufficient care. My default way of reading is nowhere near enough, and I need to read a text several times until I feel that it doesn’t contain ‘new thoughts’, even if it is well-written. I do profit a lot from podcasts and lectures, even if it is just by ‘watching a person think about the topic’ and the content is the same as in a textbook.
Audio matters
Are there by any chance plans to collect the audio in a podcast feed?
Are there considerations on whether naturally occurring things would have triggered decay already if it were (sufficiently) “easy to trigger”?
My expectation would be that e.g. neutron star+black hole merging events create quite extreme conditions and might rule out some possible ways/parameter regimes of vacuum decay?
I have the slight suspicion that the author did not set a clickable link in order to limit self-promotion.
I hope it is thus okay if I add it here in the comments: https://www.amazon.com/dp/B0BSXHJRBQ
For anyone interested: A Forum post with more background info about the novel is I’ve written a Fantasy Novel to Promote Effective Altruism
I also found Social Movement Lessons from the Fair Trade Movement interesting to read
Update on the giving game contributions towards PlayPumps International: With some delay, we were able to find a way to donate the 2 € which were selected to go to PlayPumps by our participants.
I would :)
Hi Felix, I was involved in many of the discussions and will try to answer your questions.
you did a speed giving game? It’s about giving people the opportunity to choose for themselves, and there may also be some rebels in your target group. ;)
Yes, many of the students initially liked the idea of PlayPumps and chose them as their preliminary favourite, and some actually directly put their coin into the PlayPump box[1]. After we provided the more detailed info, most changed their decision; for one of them, it just felt more proper to treat putting their coin in the PlayPump box as final, even though they would have decided differently with their updated knowledge. For the second person, “rebel” actually kind of fits as a description, at least for this one interaction with us :)
So with 1 € per person, this would make 29 players. How much time and people did you invest in the uni forum? How was the ratio with people getting a flyer / speaking with you and not participating in the giving game to those who did (had you more impressions on people than 29)?
Hmm, preparation took maybe 30 h total (most of this time went into specifying what exactly we intended to do, reading about the experiences and guides from other groups, and collecting and adapting the printed resources; if we were to repeat this in a few weeks or next semester, the preparation would be a lot faster), and we were two people present during the event itself, maybe 5 h each.
The giving game turned out to be a neat way to engage people without them feeling pressured or committing to anything in the future, so I would say that the majority of people we reached also participated in it. Unfortunately, I don’t think I can give a more precise estimate.
Have fun with your intro meeting in six days. :)
Thanks!
- ↩︎
we were using large glasses, but ‘box’ feels like a better description of their purpose
- ↩︎
Stories of this nature are sobering to hear; thank you for posting this—each post like this gets people in the community mentally closer to seeing the base rate of success in the EA community for what it is.
Your writing is enjoyable to read as well—I would read more of it.
I agree. And now I wonder whether someone has already written more about this? And if not, maybe this could be a great project?
I found the ‘personal EA stories’ in Doing Good Better (Greg Lewis) and Strangers Drowning (well, many of these are not quite about EA, but there are many similarities) very helpful for clarifying what my expectations should or could be.
A book where, say, each chapter follows the EA path of one person with their personal successes, struggles, uncertainties and failures could span the different experiences that people can have with EA. Similarly to how many people found semicycle’s story valuable, I could imagine that such a book could be very helpful for actually internalizing that EA is very much a community project where doing the right thing often means that individuals will fail at many of their efforts.
If this book already exists, I would be very happy to know about it :)
That makes a lot of sense!
Consumers might not know or think much about the health aspects of things
This describes me quite well in many of my health choices, and unfortunately this is apparently really common.
potassium salt is 10x as expensive as normal salt
In my case, I also did not find pre-mixed salt at a price that made sense to me—I bought a pharma-grade bag of KCl and mixed it with usual table salt myself[1], which resulted in a net price that is 3x that of the usual sodium salt.
So it goes back to policy, and whether governments should just regulate sodium content even in salt—we didn’t really explore this, given the higher evidence base and cheapness of salt policies.
That sounds very reasonable—I’ll be looking forward to hearing about updates in the future!
- ↩︎
with the hope that diluting by 1⁄3 will not be too much for the anti-hygroscopic components of the store-bought table salt
- ↩︎
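The ~3x net price of the home-made blend can be sanity-checked with a quick sketch. The two ingredient prices below are my own illustrative assumptions; the comment only reports the resulting ratio:

```python
# Rough cost of a home-made 2/3 NaCl + 1/3 KCl blend.
# Ingredient prices are illustrative assumptions.

nacl_eur_per_kg = 0.5  # assumed supermarket table salt price
kcl_eur_per_kg = 4.0   # assumed pharma-grade KCl price

blend_eur_per_kg = (2 / 3) * nacl_eur_per_kg + (1 / 3) * kcl_eur_per_kg
ratio = blend_eur_per_kg / nacl_eur_per_kg
print(f"blend costs {ratio:.1f}x plain table salt")  # blend costs 3.3x plain table salt
```

With KCl at roughly 8x the price of table salt, a 1⁄3 substitution lands close to the reported 3x.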
That makes a lot of sense—in practice, there are many relevant considerations and other interventions might well be preferable in many contexts.
The expert opinion
[...] though a Chinese RCT does show positive results, and the current evidence is convincing, still more studies are needed, with the magnitude of benefit not as large as you would think.
also sounds as if potassium-enriched salt surely helps to some degree, but probably isn’t a solution by itself. And I get the impression that research in the coming years will probably reduce the uncertainties here.
Apart from this, I am a bit surprised that the costs (“perhaps double the price”) would be a problem for richer countries. If I am understanding this right, this should still be obviously worth it as a health expenditure? A very simple estimate might be:
lost expected life due to high blood pressure: ~2 years (scaling the DALY burden to a single person)
expected gains from switching to potassium-enriched salt: ~1/2 year (I am guessing)
expected costs: 80 years * 2⁄3 kg/year * $10/kg = ~$550
resulting cost-effectiveness (assuming 1 year = 1 DALY): $1100 / DALY averted
Of course this isn’t comparable to GiveWell effectiveness, but it is really cheap compared to other health expenses.
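The estimate above can be made explicit in a few lines. The ~0.5 DALY gain and the $10/kg price are the guesses already stated in the estimate, not established figures:

```python
# Back-of-envelope cost-effectiveness of switching to
# potassium-enriched salt, using the guesses from the comment above.

years = 80               # lifetime over which the extra salt cost accrues
salt_kg_per_year = 2 / 3  # assumed per-person salt consumption
price_usd_per_kg = 10    # assumed price of potassium-enriched salt

lifetime_cost = years * salt_kg_per_year * price_usd_per_kg  # ~$533
dalys_averted = 0.5      # guessed benefit of the switch

cost_per_daly = lifetime_cost / dalys_averted
print(f"~${cost_per_daly:.0f} per DALY averted")  # ~$1067, i.e. roughly $1100
```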
I just realized that I could also just follow the links and found a part of the answer
[...] Another expert is more bearish, noting that though a Chinese RCT does show positive results, and the current evidence is convincing, still more studies are needed, with the magnitude of benefit not as large as you would think. That said, because it’s a substitution of sodium for potassium, there’s a double benefit for cardiovascular health; people don’t consume enough potassium, and potassium lowers blood pressure. And while there is a concern that increasing potassium intake across the population can create risk to people with chronic kidney disease, the evidence is that such people tend to suffer from cardiovascular disease anyway – most hypertension sufferers have higher risk of diabetes/obesity etc.
in section 4.1 1) g)
and also
Of huge interest too is potassium substitution; though evidence of that is fairly new, they think it is a game changer that can accelerate action. They are trying to figure out the name (e.g. potassium-enriched salts) from a public relations perspective. Increasing potassium reduces heart disease – it is an effective strategy. Low sodium salts in general do cost more – perhaps double the price. Then again, Himalayan salts are similarly twice as expensive, yet people still buy it – the challenge is getting the message out there, and that it is good for you (i.e. benefits of potassium); in Australia they are trying to understand the barriers to scaling up. There is research on how to get potash in a scalable way – there is a lot of potassium out there, and only a small amount is food grade (20%), with the rest (80%) used for things like fertilizer.
in section 3.3. Global Salt NGO, point 2.
I am happy to learn that people are working on this :) And it does make sense that the increased price also creates difficulties for adoption. This certainly isn’t a trivial problem. Also, I agree that the public relations perspective is important. Here in Germany, there were large health problems due to iodine deficiency, which were reduced by fortifying table salt—but even though the need for iodine hasn’t changed, people and products are starting to use the fortified salt less.
I really like the description, but would like to add that infinities in the “size” of the universe could also refer to time: it might be that there is an infinite future which we could possibly influence even if the size of the universe is finite. I don’t think that anyone expects this to be true with anything approaching certainty (due to entropy it seems likely that there is no possibility to sustain life/agents indefinitely), but it does not seem ruled out to me that there could be relevant loopholes, like cosmic expansion ensuring that entropy can just grow indefinitely, or other unusual scenarios (like the possibility to create a pocket universe by triggering a big bang).
Would one only use ‘direct steps’ in decision-making? How is “path dependency” interpreted?
I’m not sure what you are referring to here. I would flag that the relative value type specification is very narrow—it just states how valuable things are, not the “path of impact” or anything like that.
After talking to GPT about this[1], I think that my concern is actually already covered by your
If people were doing it by hand, there could be contradictory properties, as you mention. But with programming, which we likely want anyway, it’s often trivial or straightforward to make consistent tables.
and could be addressed to a large degree with a few automated checks and a user interface (one could even auto-fill the table given the first row of entries by assuming ~maximum resulting uncertainty for the unknown correlations). I feel like this could be really helpful for reflecting on one’s values if done right, or overwhelming if done wrong.
- ↩︎
[GPTs answer includes a summary, so I’ll leave out my prompt:] I see your point now. You’re referring to a potential inconsistency in the triangle inequality context for relative values. Let’s use a concrete example to explore this. Suppose the relative value of ‘chocolate’ to ‘vanilla’ is given by a 90% credibility interval from 0.8 to 1.2, and ‘banana’ to ‘vanilla’ is from 1.0 to 1.5. This would imply that, based on your preferences for chocolate and banana ice cream relative to vanilla, ‘banana’ should have a relative value compared to ‘chocolate’ that lies within a certain range. If, however, you then provide a wide 90% credibility interval for ‘chocolate’ to ‘banana’ (e.g., 0.2 to 5), it may seem inconsistent with the narrower ranges implied by the comparisons with ‘vanilla’. In this case, it may be that you need to revisit your estimates for ‘chocolate’ to ‘vanilla’ and ‘banana’ to ‘vanilla’. If you feel a lot of uncertainty about ‘chocolate’ vs. ‘banana’, perhaps you also should feel more uncertainty about these options compared to ‘vanilla’ than you initially thought. You may have overestimated your confidence in these comparisons. Alternatively, if you’re confident in your estimates of ‘chocolate’ to ‘vanilla’ and ‘banana’ to ‘vanilla’, you may want to narrow down your 90% credibility interval for ‘chocolate’ vs. ‘banana’. In any case, it’s a good point to keep in mind when building a relative value table. You want to be consistent in your uncertainty estimates across different comparisons. If there seems to be a contradiction, it’s a sign that you may need to rethink some of your estimates.
- ↩︎
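The consistency check discussed in the footnote could be sketched with crude interval arithmetic. The names and numbers come from the GPT example; note that dividing interval endpoints like this ignores correlations between the estimates, so the implied interval is conservative:

```python
# Crude interval-arithmetic consistency check for a relative value
# table: given 90% credibility intervals for a/c and b/c, derive the
# implied interval for a/b and compare it to a user-supplied one.

def implied_interval(a_over_c, b_over_c):
    """Widest interval for a/b consistent with intervals for a/c and b/c."""
    (a_lo, a_hi), (b_lo, b_hi) = a_over_c, b_over_c
    return (a_lo / b_hi, a_hi / b_lo)

choc_vanilla = (0.8, 1.2)    # chocolate relative to vanilla
banana_vanilla = (1.0, 1.5)  # banana relative to vanilla

choc_banana_implied = implied_interval(choc_vanilla, banana_vanilla)
print(choc_banana_implied)  # roughly (0.53, 1.2)

# A stated interval much wider than the implied one, like (0.2, 5),
# is a prompt to revisit the vanilla comparisons or tighten this one.
stated = (0.2, 5.0)
too_wide = stated[0] < choc_banana_implied[0] and stated[1] > choc_banana_implied[1]
print("flag for review:", too_wide)  # flag for review: True
```

An automated check like this is exactly the kind of thing that could run behind a user interface, flagging contradictions as the table is filled in.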
I think that a part of this perception is created by him actively framing his actions in a way compatible with a ‘press secretary model of the human mind’ (cf. “Why everyone (else) is a hypocrite”).
My impression is that he does consciously notice a mistake, and is shaken to some degree. In distancing himself from the aspects of his thinking which led to this mistake, he treats his motivations as more clear-cut than they truly were and pushes against them.
If that story were the full truth, he would not have given these answers, which are basically the opposite of “locally and situationally optimal in terms of presenting himself”.
I think that we would make a mistake in thinking “well, clearly SBF was a bad person all along, so I and other EAs will not end up making structurally similar mistakes anyway” (I am not trying to imply that this is what you said/think and only add this for completeness). Regarding lessons on a community level, I think that much of the discussion on the Forum in the recent days makes a lot of sense.