I would say doing the opposite would be a problem, like upvoting something partly because it has positive karma, so "this must be valuable".
I'm not actively doing this, nor endorsing it; I just caught myself having this reflex.
Just noticed that I tend to up/downvote and agree/disagree vote more or less depending on the current vote count.
Standard herding bias at work.
Hoping that saying it out loud will make it weaker, and maybe other people can relate.
What I don't like about this is that the mechanism would be unintuitive for many people. Also, the vibes are off.
Mutual funds underperform the index in an environment where arbitrage exists and prices are at least close to efficient.
Indices don't select the "best performing" companies; they usually select the "biggest" companies. This is where the analogy to the charity world breaks down.
I agree that we don't need to (and usually don't) play those zero-sum games. The problem is that those zero-sum games are the mechanism for price discovery, and we don't have market price signals in the charity world.
I agree with your point about diversification reducing risk. This is true for empirical uncertainty and sometimes for value uncertainty. If you have a concave utility function, reducing risk increases expected utility; if not, then no.
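For concreteness, the underlying fact is Jensen's inequality (a sketch, with $u$ the utility function and $X$ the uncertain outcome):

$$\mathbb{E}[u(X)] \;\le\; u(\mathbb{E}[X]) \quad \text{for concave } u,$$

so replacing a risky $X$ with its expectation weakly increases expected utility; for convex $u$ the inequality reverses and reducing risk has no such benefit.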
This reminds me of the Simplex Architecture, which seems well established in the literature.
And projecting onto a convex body is alright, I guess, but may not be the best choice depending on the application.
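For reference, by projecting I mean the usual Euclidean projection onto a convex set $C$ (a sketch; other norms or objectives may fit a given application better):

$$\Pi_C(x) \;=\; \operatorname*{arg\,min}_{y \in C} \,\lVert x - y \rVert_2.$$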
I don't see how this could work.
Investing in an index benefits from prices being good proxies for expected returns, because bringing information to the market is rewarded.
In a liquid market, buying pushes prices up, and selling pushes them down, so if something is mispriced it can be arbitraged away for a profit.
In charity, this is not happening. If research shows that charity A is 10x as effective as charity B (even with error bars), people don't switch until the prices (i.e., impact per unit of funding) equalize, so the price signal that is useful for index investing is not there.
Hi, welcome to the EA Forum. It's nice to see philosophical ideas that don't come from the dominant tradition here.
Your argument rests on the premise that every human has liangzhi but large models don't.
I'm skeptical of that, because the innate sense of right/wrong can be culture-dependent, and there are people with neurological and psychological conditions who don't have that same experience.
How does that fit into your worldview?
Nice. I don't think it's perfect, but it's mostly in the right ballpark.
Hey, I like your progressive pledge tool. How hard would it be to include places outside the US? And more currencies?
I sometimes check this place out for cost-of-living comparisons around the world; it's not perfect, but it gives you some idea, at least for big cities:
https://www.numbeo.com/cost-of-living/
At the same time, the good thing about 10% is that it is a much stronger Schelling point than a progressive tax, so I suppose it's better for signaling.
For me it's even more than what you say. I was thinking that even for most people working on AI or bio risk, the threats usually feel quite real on a scale of decades, and they could be personally affected. The numbers may change, but I think for most people working in EA cause areas, their work is well justified without appealing to impartiality (radical empathy would be enough, and it's less demanding) or longtermism.
Strongly agree.
For me, the discussion of impartiality (first day of the intro program) and longtermism (which isn't necessary for many of the suggested action points) were moments of doubt. Also 80k narrowing its focus to transformative AI and alienating people who don't agree with that worldview.
Somehow I still stuck around.
But I think many of the things EA proposes don't need people to buy the whole package, and we are missing out on impact by leading with strong philosophical stuff.
Non-American here.
I read that sentence as rhetorical, like "doing whatever is necessary", and I don't see it implying that "defending America" is necessarily even good.
However, if your reading is the right one, then I find it off-putting as well.
I would appreciate @Mjreard clarifying what the intent behind that was.
At least the 80k pivot to a narrow focus on AI seems to back this point.
Talking to an LLM is extremely sensitive to how you frame things, as well as to your conversation history and config files.
Not clear that what worked for you would work in general.
Yes. I'm one of those possible people. I'm happy to have reached mutual understanding.
Okay. Thank you for your patience. I understand your point and agree with the formal argument.
However, I still disagree. I don't know how to explain why without using some maths.
Let $A$ be a subset of $B$, both sets of actions. Let $G$ be the set of actions that we ought to do.
Existential generalization is something like:
if $\exists x \in A \cap G$, then $\exists x \in B \cap G$.
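Spelled out (a one-line sketch, using the definitions above): any witness of the antecedent also witnesses the consequent, because $A \subseteq B$:

$$x \in A \cap G \;\wedge\; A \subseteq B \;\Longrightarrow\; x \in B \cap G.$$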
But this is not how I would expect readers to understand "we ought to build more confined animal feeding operations" in your abstract. That reads like a general recommendation, or even an unqualified/universal statement, not like an existential one.
And let me add: even if the formal argument is airtight in your examples, it doesn't sound as obvious (to my intuition, it sounds obviously wrong) in your original case. This suggests that the same words mean different things in different contexts, at least in how I'm reading it.
Thank you for spelling out your reasoning in such a transparent way. I think our disagreement is not a matter of stylistic preferences.
I believe the following is incorrect:
If [we should build more CAFOs of the kind in which animals have above 0 welfare], then [we should build more CAFOs].
Let me rephrase your argument as:
if [CAFOs with welfare > 0 are a "should"], then [CAFOs are a "should"].
I believe that for this to hold, you would need to know that [CAFOs with welfare < 0] are impossible, not just that [CAFOs with welfare > 0] are possible.
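In symbols (a sketch, with $C$ the set of possible CAFOs, $C^{-} = \{c \in C : \text{welfare}(c) < 0\}$, and $G$ the actions we ought to take): the generic reading of "we should build more CAFOs" licenses building arbitrary members of $C$, which is only guaranteed to be good if $C^{-} = \emptyset$; knowing that $\exists c \in (C \setminus C^{-}) \cap G$ does not establish that.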
I would say I have a tendency to go with the crowd, yes, so voting in the same direction as the votes that are already there.
Which is the opposite of minding the current voting status in the way you suggest.
I think this (the first one) is a failure mode.