There is a difference between cost-effectiveness as a methodology and utilitarianism or other impartial philosophies.
You could just as easily use cost-effectiveness for personal daily goals, and some people do with things such as health and fitness, but generally speaking our minds and society happen to be sufficiently well-adapted to let us achieve these goals without needing to think about cost-effectiveness. Even if we are only concerned with the global good, it’s not worthwhile or effective to have explicit cost-effectiveness evaluation of everything in our daily lives, though that shouldn’t stop us from being ready and willing to use it where appropriate.
Conversely, you could pursue the global good without explicitly thinking about cost-effectiveness even in domains like charity evaluation, but the prevailing view in EA is (rightfully) that this would be a bad idea.
What you seem to really be talking about is whether or not we should have final goals besides the global good. I disagree and think this topic should be treated with more rigor: parochial attachments are philosophically controversial and a great deal of ink has already been spilled on the topic. Assuming robust moral realism, I think the best-supported moral doctrine is hedonistic utilitarianism and moral uncertainty yields roughly similar results. Assuming anti-realism, I don’t have any reason to intrinsically care more about your family, friends, etc (and certainly not about your local arts organization) than anyone else in the world, so I cannot endorse your attitude. I do intrinsically care more about you as you are part of the EA network, and more about some other people I know, but usually that’s not a large enough difference to justify substantially different behavior given the major differences in cost-effectiveness between local actions and global actions. So I don’t think in literal cost-effectiveness terms, but global benefits are still my general goal. It’s not okay to give money to local arts organizations, go to great lengths to be active in the community, etc: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.
It’s important to remember that having parochial attitudes towards some things in your own life doesn’t necessarily justify attempts to spread analogous attitudes among other people.
Interesting. Y&G said that they checked for a curvilinear relationship and the results “do not suggest substantively different conclusions,” which I understand to mean that there isn’t good evidence for a Kuznets curve.
I did not know that India’s average consumption was so low; perhaps their marginal increase in consumption is not large either.
Looking at Table 3, am I reading this right: the relationship for low-income countries is +0.0188 kg (annually) per $1 of annual income? That’s 18.8 kg from $1,000, which is about an order of magnitude greater than the Y&G results.
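The arithmetic here can be checked directly. (The ~1.9 kg/$1,000 figure for Y&G below is an assumption inferred from “an order of magnitude greater”, not a number quoted from their paper.)

```python
# Sanity check of the Table 3 coefficient.
slope_per_dollar = 0.0188        # kg of meat per $1 of annual income (low-income countries)
income_change = 1_000            # a $1,000 rise in annual income
implied_kg = slope_per_dollar * income_change
print(implied_kg)                # ~18.8 kg per year

yg_estimate = 1.9                # assumed Y&G-scale figure, for comparison only
print(implied_kg / yg_estimate)  # roughly 10x, i.e. about an order of magnitude
```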
There is a critical omission in all of this line of scholarship—the authors never seem to stop to think about the long-run, systemic value of growing EA itself. They seem to think of it as a bare-bones redirection of small amounts of funds, without taking our potential seriously. It seems prima facie obvious that growing the EA movement has a higher value (person-for-person) than growing any other social or political movement, and the consequences of achieving an EA majority in any polity would be tremendous. As someone who identifies with EA first and other movements second (the framework which the author seems to assume), I think that EA is more philosophically correct than others, so its adherents will aim towards better goals. And in practice, EA appears to be more flexible, rational and productive than other movements. So donations and activism in support of EA movement growth are superior to efforts in favor of other things, assuming equal tractability.
If you fully clarify that this is a project of someone who identifies as an effective altruist, and that your position may or may not be shared by all ‘effective altruists’, then my objections are pretty much moot.
I don’t see how objections about methodology would be mooted merely because the audience knows that the methodology is disputed.
What is the benefit of including them?
That they are predictors of how good or bad a political term will be.
Does the benefit outweigh the cost of potentially unnecessarily shuffling some candidates?
If they are weighted appropriately, they will only shuffle them when it is good to do so.
There is one objective reality and our goal should be to get our understanding as close to it as possible.
Then why do you want me to flip coins or leave things to the reader’s judgement...?
1.) Robust to new evidence
I recently removed that statement from the document because I decided it’s an inaccurate characterization.
2.) Robust to different points of view
This also contradicts the wish for a model that is objective. “Robust to different points of view” really means making no assumptions on controversial topics, which leaves the model incomplete.
Generally speaking, I don’t see justification for your point of view (that issues are generally not predictive of the value of a term… this contradicts how nearly everyone thinks about politics), nor do you seem to have a clear conception of an alternative methodology. You want me to include EA issues, yet at the same time you do not want issues in general to have any weight. Can you propose a complete framework?
Apologies for not being clear enough; I am suggesting the first, and part of the second, i.e. removing issues not related to EA. It is fine to discuss the best available evidence on “not well studied topics”, but I don’t think it’s advisable to give an “official EA position” on those.
I will make it reasonably clear in the proper reports that this is An EA Project rather than The EA Position.
Almost by definition, the issues that are distanced from EA will tend to get less weight. So, it’s not super important to include them at all, but at the same time they will not change many of the outcomes.
The model easily allows for narrowing down to core issues, so probably I (or anyone else who wants to work on it) will start by making a narrow report, and then fill it out fully if time allows. Then the two versions can be marketed differently and people can choose which one to look at.
In addition, my first point is questioning the idea of ranking politicians based on the views they claim or seem to hold because of how unpredictable the actual actions are regardless of what they say.
So it seems like you disagree with the weight I give to issues relative to qualifications, you think it should be less than 1.8. Much less?
I believe EA should stick to spreading the message that each individual can make the world a better place through altruism based on reason and evidence, and that we should trust no politician or anybody else to do it for us.
I think of it more as making a bet than as truly trusting them. Reports/materials certainly won’t hide the possible flaws and uncertainty in the analysis.
I’m not sure exactly, my perception is that (1) often they don’t currently but the new growth is more likely to be factory farming, (2) traditional farming isn’t clearly better. Farming in the West is probably covered by more welfare regulations than farming in poor countries.
I’m unclear, are you suggesting that we remove “qualifications” (like the candidate’s experience, character, etc), or that we remove issues that are not well studied and connected to EA (like American healthcare, tax structure, etc), or both?
I downvoted this because I think it’s valuable for the EA community to have a public, credible norm against violating people’s legally recognized rights. Destroying property does this, so we wouldn’t be a very trustworthy community if we endorsed such behavior.
On the other hand, the last sentence of your comment makes me feel that you’re equating my not agreeing with you with my not understanding probability. (I’m talking about my own feelings here, irrespective of what you intended to say.)
Well, OK. But in my last sentence, I wasn’t talking about the use of information terminology to refer to probabilities. I’m saying I don’t think you have an intuitive grasp of just how mind-bogglingly unlikely a probability like 2^(-30) is. There are other arguments to be made on the math here, but getting into anything else just seems fruitless when your initial priors are so far out there (and when you also tell people that you don’t expect to be persuaded anyway).
It’s worth a shot, although long run cooperation / arms races seems like one of the toughest topics to tackle (due to the inherent complexity of international relations). We should start by looking through x-risk reading lists to collect the policy arguments, then see if there is a robust enough base of ideas to motivate frequent judgements about current policy.
1. I think rating candidates on a few niche EA issues is more likely to gain traction than trying to formalize the entire voting process. If you invest time figuring out which candidates are likely to promote good animal welfare and foreign aid policies, every EA has good reason to listen to you. But the weight you place on e.g. a candidate’s health has nothing to do with the fact that you’re an EA; they’d be just as well off listening to any other trusted pundit. I’m not sure if popularity is really your goal, but I think people would be primarily interested in the EA side of this.
I think it would be hard to keep things tight around traditional EA issues because then we would get attacked for ignoring some people’s pet causes. They’ll say that EA is ignoring this or that problem and make a stink out of it.
There are some things that we could easily exclude (like health) but then it would just be a bit less accurate while still having enough breadth to include stances on common controversial topics. The value of this system over other pundits is that it’s all-encompassing in a more formal way, and of course more accurate. The weighting of issues on the basis of total welfare is very different from how other people do it.
Still I see what you mean, I will keep this as a broad report but when it’s done I can easily cut out a separate version that just narrows things down to main EA topics. Also, I can raise the minimum weight for issue inclusion above 0.01, to keep the model simpler and more focused on big EA stuff (while not really changing the outcomes).
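For concreteness, the kind of weighting and inclusion threshold described above could be sketched like this. Everything here is a made-up illustration: `score_candidate`, `MIN_WEIGHT`, and all the issue names, weights, and scores are hypothetical, not values from the actual model; only the 1.8 issues-to-qualifications ratio and the 0.01 cutoff come from the thread.

```python
MIN_WEIGHT = 0.01  # issues below this weight are dropped to keep the model focused

def score_candidate(issue_scores, weights, qualification_score,
                    issues_vs_qualifications=1.8):
    """Weighted sum of issue stances plus a qualifications term.

    issues_vs_qualifications mirrors the 1.8:1 issue-to-qualification
    weighting mentioned in the thread.
    """
    kept = {k: w for k, w in weights.items() if w >= MIN_WEIGHT}
    issue_total = sum(kept[k] * issue_scores[k] for k in kept)
    return issues_vs_qualifications * issue_total + qualification_score

# Illustrative numbers only; "health" falls below the cutoff and is excluded.
weights = {"animal welfare": 0.30, "foreign aid": 0.25, "health": 0.005}
scores = {"animal welfare": 0.8, "foreign aid": 0.5, "health": 0.9}
print(score_candidate(scores, weights, qualification_score=0.6))
```

Raising `MIN_WEIGHT` is then a one-line change that narrows the report to the big EA issues without restructuring anything.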
2. It might be a good idea to stick to issues where any EA would agree: animal welfare, foreign aid. On other topics (military intervention, healthcare, education), values are often not the reason people disagree—they disagree for empirical reasons. If you stick to something where it’s mostly a values question, people might trust your judgements more.
Epistemic modesty justifies convergence of opinions.
If there is empirical disagreement that cannot be solved with due diligence looking into the issue, then it’s irrational for people to hold all-things-considered opinions to one side or the other.
If it’s not clear which policy is better than another, we can say “there is not enough evidence to make a judgement”, and leave it unscored.
So yes there is a point where I sort of say “you are scientifically wrong, this is the rational position,” but only in cases where there is the clear logic and validated expert opinion to back it up to the point of agreement among good people. People already do this with many issues (consider climate change for instance, where the scientific consensus is frequently treated as an objective fact by liberal institutions and outlets, despite empirical disagreement among many conservatives).
Obviously right now the opinions and arguments are somewhat rough, but they will be more complete in later versions.
The present and past are the only tools we have to think about the future, so I expect the “pre-driven car” model to make more accurate predictions.
They’ll be systematically biased predictions, because AGI will be much smarter than the systems we have now. And it’s dubious that AI should be the only reference class here (as opposed to human brains vis-a-vis animal brains, most notably).
I have not yet found any argument in favour of AI Risk being real that remained convincing after the above translation.
If so, then you won’t find any argument in favor of human risk being real after you translate “free will” to “acting on the basis of social influences and deterministic neurobiology”, and then you will realize that there is nothing to worry about when it comes to terrorism, crime, greed or other problems. (Which is absurd.)
Also, I don’t see how the arguments in favor of AI risk rely on language like this; are you referring to the real writing that explains the issue (e.g. papers from MIRI, or Bostrom’s book) or are you just referring to simple things that people say on forums?
It seems absurd to assign AI-risk less than 0.0000000000000000000000000000001% probability because that would be a lot of zeros.
The reality is actually the reverse: people are prone to assert arbitrarily low probabilities because it’s easy, but justifying a model with such a low probability is not. See: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/
And, after reading this, you are likely to still underestimate the probability of AI risk, because you’ve anchored yourself at 0.00000000000000000000000000000000000001% and won’t update sufficiently upwards.
Anchoring can push in either direction depending on context, and it’s infeasible to guess its net effect in a general sense.
I’m not sure about your blog post because you are talking about “bits” which nominally means information, not probability, and it confuses me. If you really mean that there is, say, a 1 − 2^(-30) probability of extinction from some cause other than x-risk then your guesses are indescribably unrealistic. Here again, it’s easy to arbitrarily assert “2^(-30)” even if you don’t grasp and justify what that really means.
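To make concrete what a figure like 2^(-30) means when read as a probability:

```python
# A probability of 2^-30 ("30 bits against") in plain numbers.
p = 2 ** -30
print(p)             # 9.313225746154785e-10, i.e. about one in a billion
print(round(1 / p))  # 1073741824 -- you'd expect to be wrong about
                     # once per ~1.07 billion such claims
```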
Yes, and I will pay attention to everything anyway unless this thread gets super unwieldy. I am mainly suggesting that people focus on the things that will make the biggest difference.
When I know someone closely, I value their life and experiences, intrinsically. I don’t feel as if I wish they had never been born, nor do I wish to kill them.
And it’s straightforward to presume that, with people who I don’t know closely, I would feel similarly about them if I knew them well.
So if I want to treat people consistently with my basic inclinations, I should not be NU towards them.
It’s hard to generalize across times and cultures, but ephebophiles and hebephiles seem to be treated much more harshly these days. Often they are placed in the category of pedophiles (who also might have been more tolerated in the past, I’m not sure).
I think historical immigrants to the US had to deal with more frequent racism at the social level. Historical immigration policy might have been guided by economic need rather than moral values.
It seems like a fair assumption that prisoners are broadly treated better today (in the West) than they used to be. Sexual abuse and solitary confinement were probably more common back in the day.
Re: #1, the overall distribution of articles on different topics is not particularly impressive. There are other outlets (Brookings, at least) which focus more on global poverty.
I think it is fair to say that several moral theories are concerned with grave injustices such as the current state of racial inequity in the United States. Closing the race-wealth gap will only be a “strange thing to focus on” if you assume, with great confidence, utilitarianism to be true.
I think that arguing from moral theories is not really the right approach here, instead we can focus on the immediate moral issue—whether it is better to help someone merely because they or their ancestors were historically mistreated, holding welfare changes equal. There is a whole debate to be had there, which has plenty of room for eclectic arguments that don’t assume utilitarianism per se.
The idea that it’s not better is consistent with any consequentialism which looks at aggregate welfare rather than group fairness, and some species of nonconsequentialist ethics (there is typically a lot of leeway and vagueness in how these informal ethics are interpreted and applied, and academic philosophers tend to interpret them in ways that reflect their general political and cultural alignment).
I totally agree with you that “unequal racial distribution can have important secondary effects”, and this is why there is a solid case for paying attention to the race-wealth gap, even on utilitarian grounds.
Sure, but practically everything should get attention by this rationale. The real question is—how do we want to frame this stuff? What do we want to implicitly suggest to be the most important thing?
Go ahead and write one! Do some research/modeling and share your findings. I did, and you can too.
Highlight your text and then select the hyperlink icon in the pop-up bar.
Well, deaths from nuclear explosions themselves will obviously be a small minority of the world’s population.
Large numbers of people will survive severe fallout: it’s fairly easy to build safe shelters in most locations. Kearny’s Nuclear War Survival Skills shows how feasible it is. Governments and militaries of course know to prepare for this sort of thing. And I think fallout doesn’t become a truly global phenomenon; it is only deadly if you are downwind of the blast sites.
Here is one of the main nuclear winter studies, which uses a modern climate model. They assume the use of the entire global arsenal (including 10,000 US and 10,000 Russian weapons) to get their pessimistic 150 Tg scenario, which has a peak cooling of 7.5 degrees Celsius. That would still leave large parts of the world with relatively warm temperatures. However, the US has already gone down to 1,800 weapons in the actual strategic arsenal, with 4,000 in the general stockpile. Russia’s stockpile is 7,850 with only 1,600 in the strategic arsenal. The use of all nuclear weapons in a war is an unrealistic assumption because countries have limited delivery systems, they’ll want to keep some nuclear weapons in case they survive, and they won’t want to cripple their own country and allies with excessive global cooling. Also, the assumption that all weapons will detonate is unrealistic: missile defense systems and force-on-force strikes will destroy many nuclear weapons before they can hit their targets. So even their moderate 50 Tg scenario with 3.5 degrees Celsius of cooling seems implausible: it would still require nearly 7,000 weapons detonated. It seems like we are really looking at 2–3 degrees Celsius of cooling from an unlimited exchange, approximately enough to cancel out past and future global warming. The temperature also recovers quite a bit in just a few years.
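As a rough sanity check on these numbers, one can scale the study’s pessimistic scenario linearly with warheads detonated. This linearity is my own simplifying assumption, not something the study claims, and the function below is just an illustration; the arsenal figures are the ones quoted in the comment above.

```python
# Assumption: peak cooling scales linearly with warheads detonated.
# Reference point: ~20,000 warheads -> 150 Tg -> 7.5 C peak cooling.
def cooling_estimate(warheads, ref_warheads=20_000, ref_cooling_c=7.5):
    """Linearly scale peak cooling with the number of warheads detonated."""
    return ref_cooling_c * warheads / ref_warheads

strategic = 1_800 + 1_600   # deployed US + Russian strategic arsenals
stockpile = 4_000 + 7_850   # total US + Russian stockpiles
print(cooling_estimate(strategic))  # ~1.3 C if only strategic arsenals detonate
print(cooling_estimate(stockpile))  # ~4.4 C if entire stockpiles detonate
```

The 2–3 degree figure in the comment sits between these two endpoints, which is what you would expect once interception and non-detonation are accounted for.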