How so?
As I read it, this post compares against donations to GiveWell, not donations to AI safety research.
I think there is a huge difference between:
Being hired by an EA (TM) org
Doing something counterfactually impactful
If you are hired by an EA org as paid staff, you only get counterfactual credit for the margin by which you are better than the next possible hire.
On the other hand, if you have a normal job and donate, all the donations are counterfactual.
Similarly, if you do unpaid work the bar is much lower, and would be something akin to “are the coordination costs worth it?”.
BTW, earning to give is still valuable (if you can donate over $50,000 a year).
This is still an extremely high bar.
Hi,
I care instrumentally, because it may negatively impact the welfare of other people.
And yes, I agree that this is totally compatible with rejecting egalitarianism and prioritarianism, but it’s not so obvious.
I was trying to illustrate why I think many people endorse some sort of egalitarianism and have thoughts like “inequality bad”, which are easy to confuse with “inequality intrinsically bad”.
I have another intuition for egalitarianism: the distribution of power.
Most resources in our world can be traded for influence/power, such as money, time and materials.
Therefore, in the real scenarios that guide our intuitions, inequality is associated with concentration of power.
To put it in a caricature example: I don’t care if TechnoBro 3000 celebrates his birthday in the asteroid belt with his 10^30 gold-plated robot friends, but I do care if he can buy the elections of Democratistan.
This is not a rebuttal of the narrow definition of egalitarianism, but is close enough to work as an intuition pump if we are not being very theoretical.
Maybe because “P(doom) ranges from 10% to 99%” excludes many people who state a lower P(doom) or refuse to state a number.
Maybe because “Will MacAskill sits at 10–20%, calling himself ‘optimistic today’, but notes this is among the lowest estimates in serious circles” implies that people who state a lower number are not serious.
Those were my reasons for considering a downvote. In the end I didn’t do it, because by then the post was already in the negatives.
There is a huge selection bias at play here: people who appear on AI safety podcasts or use the expression P(doom) have self-selected for higher numbers than people who don’t, and this is not addressed in the post.
Questions:
How does your estimate of $800 per DALY compare to more established interventions such as insecticide-treated bednets and vitamin A supplementation?
Do you have any cost effectiveness numbers for screening?
Oh, I understand.
Unfortunately my German is not that good, and I’m worried that an AI translation would cause some of the problems I mentioned earlier.
Yes, I meant in terms of style, but also in the sense that AI adds filler and obfuscates how ideas relate to each other.
Can you link the paper here?
I think you raise great points.
I also think the AI is too much in this one, and you could cut it a bunch and it would improve a lot.
Random tangent: tobacco advertising is also legal in Andorra.
I had to open the preprint to understand the post.
Your analogy to ferromagnetism/superconductivity is superfluous; it’s just comparing one sigmoid to another for aesthetic purposes.
And your claims about MI_epistemic being a measure of distance to safe AGI need to be toned down to be taken seriously.
I like your point:
“Even when funding is abundant, poorly designed systems can still produce scarcity.”
This is a real concern, even more so if EA funding grows faster than the infrastructure to spend the money wisely.
But.
I think you used ChatGPT to the point where it’s no longer an editorial tool, and it compromised the intelligibility of the post.
Afaiu, you have 2 main points in your post:
Kidney deaths as a case study on how money is not the only bottleneck.
CTA: please support our petition.
Both reasonable and interesting. But hard to parse from the AI aesthetic fillers.
And also, the Manifund post explicitly asks what systems we need to navigate having more money. I think you are preaching to the choir when you say that systems can be more important than sheer amounts of money.
I have mixed feelings on this post:
On the one hand, the case for compensating donors is compelling and seems well supported.
However, the AI style of the prose makes the arguments sound weaker, because we are developing antibodies to this kind of text after being exposed to AI slop elsewhere.
Also, I think “effective altruism is preparing for a world with far more money” is a non sequitur. There are problems in the world that we know how to solve with money; that doesn’t mean the prevalent opinion is that money is the only constraint. People frequently talk about talent and coordination as bottlenecks.
Hi, welcome to the EA Forum.
There are some very interesting takes about marketing in EA here:
https://forum.effectivealtruism.org/topics/marketing?sortedBy=new
My input is:
In general, there are no canonical EA guidelines telling you what you should do; there is lots of debate and many different points of view.
I’m telling you this because when I first found EA I had very complicated questions and was hoping EA would provide simple answers to them. Not sure if this resonates with you, but I’m sharing it in case it applies.
I would say I have a tendency to go with the crowd, yes, so voting in the same direction that is already there.
Which is the contrary of minding the current voting status, as you suggest.
I think this (the first one) is a failure mode.
I would say doing the opposite would be a problem, like upvoting something partly because it has positive karma, so “this must be valuable”.
I’m not actively doing this nor endorsing it; I just caught myself having this reflex.
Just noticed that I tend to upvote/downvote and agree/disagree-vote more or less depending on the current vote count.
Standard herding bias at work.
Hoping that saying it out loud will make it weaker, and maybe other people can relate.
What I don’t like about this is that the mechanism would be unintuitive for many people. Also, the vibes are off.
I think the devil is in the details. The principles are fine but what really matters is the operationalization. Hard to tell without more information about the program.