Sure, but surely we give it according to Shapley values? What if you had missed this? We should reward Jeff for that.
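For illustration, here is roughly the kind of attribution I mean, with entirely made-up payoffs (none of these numbers come from the actual situation):

```python
from itertools import permutations

# Toy Shapley-value example with made-up payoffs (purely illustrative).
# v maps a coalition of contributors to the value that coalition produces.
v = {
    frozenset(): 0,
    frozenset({"Jeff"}): 4,          # Jeff alone catches the issue
    frozenset({"us"}): 6,            # we alone do most of the other work
    frozenset({"Jeff", "us"}): 10,   # together the full value is realised
}

players = ["Jeff", "us"]

def shapley(player):
    """Average marginal contribution of `player` over all join orders."""
    orders = list(permutations(players))
    total = 0
    for order in orders:
        before = frozenset(order[:order.index(player)])
        total += v[before | {player}] - v[before]
    return total / len(orders)

for p in players:
    print(p, shapley(p))   # Jeff 4.0, us 6.0 (credit splits by average marginal contribution)
```

The point is just that credit gets split by average marginal contribution, so spotting something everyone else missed still gets rewarded.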
Where can I see the debate week diagram if I want to look back at it?
Sadly, it looks like the debate week will end without many of the stronger[1] arguments for Global Health being raised, at least at the post level. I don’t have time to write them all up, and in many cases they would be better written by someone with more expertise, but one issue is firmly in my comfort zone:
To the extent that we discuss this issue rarely, it really ought to be worth someone’s time to write up these supposed strong arguments. To the extent that they haven’t been, even after a well-publicised week of discussion, I will believe it more likely that they don’t exist.
Yudkowsky is very happy to answer difficult questions, more so than most public figures.
The Lightcone team are generally very transparent, answering specific internal questions.
If you publish a bad piece and share it with millions of people, I don’t really feel obliged to talk to you or listen to other things you write until you correct the inaccurate piece. I don’t think any other community would, and I think it’s a bad use of our time to extend this absurd level of charity.
People are free to tell me the Wired article wasn’t inaccurate or lazy, but scanning it, it looks that way.
Here are quotes I could find in 15 minutes from your first article that leave the reader with an inaccurate impression. I have not read this new article.
“Elon Musk has said that EA is close to what he believes”—has Musk acted on these supposed beliefs or is this just guilt by association?
“comparable to what it’s estimated the Saudis spent over decades to spread Islamic fundamentalism around the world”—I can find many things that cost $46, but I note that you chose a terrorist ideology.
“Insecticide-treated bed nets can prevent malaria, but they’re also great for catching fish. In 2016, The New York Times reported that overfishing with the nets was threatening fragile food supplies across Africa.”—my sense is that this is widely debunked. As a result of your article it was shared by Marc Andreessen. As you yourself note, we should count harms as well as benefits. I count this as a harm to what I am confident is an effective way to stop malaria.
“In a subsection of GiveWell’s analysis of the charity, you’ll find reports of armed men attacking locations where the vaccination money is kept—including one report of a bandit who killed two people and kidnapped two children while looking for the charity’s money.”—I think it’s a bit absurd to imply that the norm is to count this stuff as the costs of aid. Perhaps it should be (and it’s good that GiveWell mentioned it), but the implication that they are unusually bad for not doing so seems unfair.
I could go on.
Leif, we do not owe you our time. You had the same social credit that all critics have, and a large platform. You could have come here and argued your case. I am sure people would have engaged. But for me, you have burned that credit by sharing inaccuracies with millions of people. Your piece started a news cycle about the harms of bednets based on inaccurate information. That has real harms. So I don’t care to read your piece.
I don’t know whether I am the hero in my own story—I have done many things I regret—but I do know a thing or two about dealing with those I disagree with. I would not publish a piece with this many errors, and if I did, I wouldn’t expect people to engage with me again. I do not understand why you think we would.
I hope you are well, genuinely.
I think there is something here about the kinds of people who are steady hands not necessarily having great leverage, either in terms of pay or status. But realistically such a person may be very costly to replace or may be doing a very valuable role.
In that way, a sensible organisation would increase their pay and (to the extent possible) status by reflecting not on the change in their output from year to year, but on how difficult they are to replace, which might mean weeks of hiring, months of training, months of management time, and perhaps years before the function works as well as it previously did (a rough sketch of that arithmetic is below).
It is tricky to think how such negotiations can take place properly, but it seems likely to me that the sort of person who is likely to be a steady hand might not be agitating for such, but that in turn means those who would stay if paid more, or appreciated more, don’t see that option as available to them.
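A very rough sketch of that arithmetic, with made-up numbers (the salary and time costs are placeholders, not estimates for any real role):

```python
# Made-up placeholder numbers: rough cost of replacing a "steady hands" person,
# compared with the raise that might have kept them.
monthly_cost = 7_000            # hypothetical fully-loaded monthly cost of the role

hiring_weeks = 6                # recruiter and team time spent hiring
training_months = 3             # new hire below full productivity
management_months = 2           # extra management time absorbed
ramp_back_months = 12           # until the function works as well as before
ramp_back_loss = 0.2            # assumed output lost during that ramp-back period

replacement_cost = (
    (hiring_weeks / 4) * monthly_cost
    + training_months * monthly_cost
    + management_months * monthly_cost
    + ramp_back_months * monthly_cost * ramp_back_loss
)

retention_raise = 12 * monthly_cost * 0.10   # a 10% raise for a year

print(f"Replacement cost ~ {replacement_cost:,.0f}")    # ~ 62,300
print(f"Cost of a 10% raise ~ {retention_raise:,.0f}")  # ~ 8,400
```

Even with numbers this crude, the replacement cost can come out an order of magnitude larger than the raise.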
I sort of think this is a reason not to have EA-endorsed politicians unless someone has really done the due diligence. This is a pretty high-trust community and people expect something someone says confidently to be robustly tested, but political recommendations (and some charity ones, to be fair) seem much less well researched than general discussions on policy etc.
I don’t think the community tag is warranted on this post.
I’m making my way through, but so far I guess it’s gonna be @Richard Y Chappell🔸’s arguments around ripple effects.
Animals likely won’t improve the future for consciousness, but more, healthy humans might.
I haven’t read the article fully yet though.
Argument: Nietzschean Perfectionism
@Richard Y Chappell🔸 theorises that:
maybe the best things in life—objective goods that only psychologically complex “persons” get to experience—are just more important than creature comforts (even to the point of discounting the significance of agony?). The agony-discounting implication seems implausibly extreme, but I’d give the view a minority seat at the table in my “moral parliament”
To my (Nathan’s) ears this is either a discontinuous valuation of pleasure and pain across consciousnesses, or one that puts far more value at the higher end. In this way the improvement to the life of a human could be worth infinitely many insects, or some arbitrarily large number of them.
I am willing to discuss (either in the comments or on a call) any of these arguments. I don’t think any of them hold much water and I doubt that in total they are enough to shift the weight of what we should do.
I am glad @Henry Howard🔸 wrote them up, but to the extent there is now a big list of arguments I don’t find compelling, I am slightly more convinced.
My response to this is that we can always take medians. And to the extent that the medians multiplied by the number of animals suggest this is a very large problem, the burden is on those who disagree to push the estimates down.
There isn’t some rule which says that extremely wide confidence intervals can be ignored. If anything, extremely wide confidence intervals ought to be inspected more closely, because the value inside them can take a lot of different values.
I just sort of think this argument doesn’t hold water for me.
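To make the point concrete, a rough sketch with placeholder numbers (the welfare range, animal counts and human benchmark below are all made up, not the actual estimates):

```python
import statistics

# All numbers are placeholders, purely to illustrate the "take the median" point.
# Suppose welfare-range estimates for some animal span a very wide interval
# (in human-equivalent units); we take the median rather than discarding the range.
welfare_estimates = [0.002, 0.01, 0.05, 0.2, 0.5]
median_welfare = statistics.median(welfare_estimates)         # 0.05

animals_helped_per_dollar = 10        # placeholder
humans_helped_per_dollar = 0.001      # placeholder

animal_benefit = median_welfare * animals_helped_per_dollar   # 0.5 human-equivalents per dollar
human_benefit = 1.0 * humans_helped_per_dollar                # 0.001 human-equivalents per dollar

print(animal_benefit, human_benefit)
# Even the median of a very wide range, multiplied by the number of animals,
# can come out large; the width of the interval alone doesn't make it go away.
```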
Argument: Approximations are too approximate.
@Henry Howard🔸 argues that much of the scholarship that animal welfare estimates are based on gives such wide ranges that it doesn’t support clear conclusions:
Unfortunately these ranges have such wide confidence intervals that, putting aside the question of whether the methodology and ranges are even valid, it doesn’t seem to get us any closer to doing the necessary cost-benefit analyses.
Do you want to do a debate on YouTube? I’m looking for polite, truth-seeking participants.
Argument: The money can be spent over a long time, and so likely will be able to be spent.
The footnote on the main question says:
In total. You can imagine this is a trust that could be spent down today, or over any time period
Likewise @Will Howard🔹 argues that this isn’t that significant an additional amount of money anyway:
“$100m in total is not a huge amount (equiv to $5-10m/yr, against a background of ~$200m). I think concern about scaling spending is a bit of a red herring and this could probably be usefully absorbed just by current intervention”
Which arguments do you find compelling in debate week?
Can I push you on this a bit?
I want to note that there is more consensus in favour of the proposition than I expected. I would have guessed the median was much nearer 50% than it is.
I note that in some sense I have lost trust that the EA community gives me a clear prioritisation of where to donate.
Some clearer statements:
I still think GiveWell does great work
I still generally respect the funding decisions of Open Philanthropy
I still think this forum has higher standards than most places
It is hard to know exactly how high-impact animal welfare funding opportunities interact with x-risk ones
I don’t know what the general consensus on the most impactful x-risk funding opportunities is
I don’t really know what orgs do all-considered work on this topic. I guess the LTFF?
I am more confused/inattentive, and this community is covering a larger set of possible choices, so it’s harder to track what the consensus is