Nice. I don't think it's perfect, but it's mostly in the right ballpark.
Hey, I like your progressive pledge tool. How hard would it be to include places outside the US? And more currencies?
I sometimes check this place out for cost-of-living comparisons around the world; it's not perfect, but it gives you some idea, at least for big cities:
https://www.numbeo.com/cost-of-living/
At the same time, the good thing about 10% is that it is a far stronger Schelling point than a progressive scheme, so I suppose it's better for signaling.
EA Community Mental Health & Productivity Survey (deadline extended)
For me it's even more than what you say. I was thinking that even for most people working on AI or bio risk, the threats usually feel quite real on a scale of decades, and they could be personally affected. The numbers may change, but I think for most people working in EA cause areas, their work is well justified without appealing to impartiality (radical empathy would be enough, and it's less demanding) or longtermism.
Strongly agree.
For me, the discussion of impartiality (first day of the intro program) and longtermism (which isn't necessary for many of the suggested action points) were moments of doubt. So was 80k narrowing its focus to transformative AI and alienating people who don't agree with that worldview.
Somehow I still stuck around.
But I think many of the things EA proposes don't need people to buy the whole package, and we are missing out on impact by leading with strong philosophical stuff.
Non-American here.
I read that sentence as rhetorical, like "doing whatever thing is necessary", and I don't see it implying that "defending America" is necessarily even good.
However, if your read is the right one, then I find it off-putting as well.
I would appreciate @Mjreard clarifying what the intent behind that was.
At least the 80k pivot to a narrow focus on AI seems to back this point.
Talking to an LLM is extremely sensitive to how you frame things, and to your conversation history and config files.
It's not clear that what worked for you would work in general.
Yes. I'm one of those possible people. I'm happy to have reached mutual understanding.
Okay. Thank you for your patience. I understand your point, and agree with the formal argument.
However, I still disagree. I don't know how to explain why without using some maths.
Let A be a subset of B, both sets of actions. Let G be the set of actions that we ought to do.
Existential generalization is something like: if ∃x (x ∈ A ∩ G), then ∃x (x ∈ B ∩ G).
But this is not how I would expect readers to understand "we ought to build more confined animal feeding operations" in your abstract. It reads like a general recommendation, or even an unqualified/universal statement, not like an existential one.
And let me add: even if the formal argument is airtight in your examples, it doesn't sound as obvious (to my intuition, it sounds obviously wrong) in your original case. This suggests that the same words mean different things in the different contexts, at least in how I'm reading them.
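To make the gap explicit, here is a minimal sketch of the two readings in my notation (B = CAFO-building actions, G = the actions we ought to do, as above):

```latex
% Existential reading -- what existential generalization licenses:
\exists x \,(x \in B \cap G)

% Generic/universal reading -- how I expect readers to parse the abstract:
\forall x \,(x \in B \rightarrow x \in G)

% The first does not entail the second: take B = \{a, b\}, G = \{a\}.
% Then some CAFO-building action is obligatory, but CAFO-building as such is not.
```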
Thank you for spelling out your reasoning in such a transparent way. I think our disagreement is not a matter of stylistic preferences.
I believe the following is incorrect:
If [we should build more CAFOs of the kind in which animals have above 0 welfare], then [we should build more CAFOs].
Let me rephrase your argument as
If [building CAFOs with welfare > 0 is a should] then [building CAFOs is a should].
I believe that for this to hold you would need to know that [CAFO welfare < 0] is impossible, not just that [CAFO welfare > 0] is possible.
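A toy example of why that matters, with made-up welfare numbers:

```latex
% Two hypothetical kinds of CAFO (invented numbers, for illustration only):
W(\mathrm{CAFO}_1) = +1, \qquad W(\mathrm{CAFO}_2) = -10

% "Build more CAFOs with W > 0" selects only CAFO_1 and raises aggregate welfare.
% "Build more CAFOs" simpliciter also permits CAFO_2, which lowers it.
% So the unconditional "should" needs W < 0 ruled out, not just W > 0 possible.
```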
Hi Vera,
I agree in principle with the meta point you make here. I think it's fine not to state every premise in the abstract and the conclusion, if it's something that is argued for.
I also agree that "net positive welfare is possible in CAFOs" is not an assumption, but a premise that is argued for (and I find the arguments sound).
However, I still think the abstract as it stands now is saying something different, namely, that [maximizing aggregate welfare] → [we should build more CAFOs].
As far as I know, this would be the logical conclusion from aggregationism if we assume that [animals in CAFOs have net positive lives], not merely that [it is possible that animals in CAFOs have net positive lives].
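In symbols, the difference I am pointing at (my shorthand; the diamond reads "it is possible that"):

```latex
% What aggregationism (Agg) plus the assumed premise would give:
\mathrm{Agg} \wedge (W_{\mathrm{CAFO}} > 0) \;\Rightarrow\; \text{build more CAFOs}

% What is actually argued for, which is weaker:
\mathrm{Agg} \wedge \Diamond(W_{\mathrm{CAFO}} > 0) \;\not\Rightarrow\; \text{build more CAFOs}
```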
Right. Then I think this should be in the abstract. Because right now the abstract says:
> we should pick the one that maximizes the net aggregate welfare of animals. I argue that, if this is right, then, counterintuitively, we ought to build more confined animal feeding operations
and the "if this is right" only refers to the assumption of aggregation, not to the assumption of positive welfare in CAFOs,
and the conclusion also doesn't say we maybe ought to (if there are cases where CAFO welfare > 0).
I think your reductio is standing on the big "if" that animals in CAFOs have a net positive existence, and the abstract/post skips that.
Noted. Thank you for flagging. For now, I think you can read the text by hovering your mouse above each option (not comfortable!). We will change the format ASAP.
Should be fixed now.
Fair point. I no longer fully endorse point 2, so I've struck that out.
Hi Andrew,
I appreciate how Seth is playing nice because you are new to the EA Forum.
However, I'm strongly downvoting for the following reasons:
"Leading cause area" is very strong wording to then not follow with any comparison to other cause areas.
> Strong claims such as "The case is overstated" and "The calculations double-count a lot of livestock carcasses" appear to have resulted from those commentators not having read the relevant studies with sufficient care, or from failing to understand the mathematics of the calculations, or their implications.

Reasonable people can disagree on how to count byproduct allocation. Calling that a failure to read carefully or to understand the mathematics misrepresents the disagreement.

> These incorrect claims undermine the case for sustainable pet diets. Similar attempts to defend the status quo were made, and are being made, by those seeking to undermine the science demonstrating the adverse effects of smoking, and the consumption of animal products, and of fossil fuels. However, these claims are incorrect, and sometimes profoundly so.

This is not an argument, and it does not belong here. It is just rhetoric.
Your paper is published in MDPI, a known pay-to-publish venue.
I have absolutely no idea whether making pets eat plant-based diets is a good intervention, or how it competes with other possible uses of our resources, and I refuse to make any object-level claims about it here.
Strongly downvoted because, while it points to a plausible failure mode of LLMs, this is unnecessarily long and hard to read, and it's not clear what is being tested or how.
So, have you done anything, or do you just have the high-level idea?
Hi, welcome to the EA Forum. It's nice to see philosophical ideas that don't come from the dominant tradition here.
Your argument rests on the premise that everyone (human) has liangzhi but large models don't.
I'm skeptical of that, because the innate sense of right/wrong can be culture-dependent, and there are people with neurological and psychological conditions who don't have that same experience.
How does that fit into your worldview?