There is quite a bit of recent controversy about pit bulls; that seems like the right place to start.
Minding Our Way by Nate Soares comes close, although I don’t think he addresses the “what if there actually exist moral obligations?” question, instead mostly assuming non-moral-realism.
Not sure what he says (I haven’t got the interest to search through a whole series of posts for the relevant ones, sorry), but my point assuming antirealism (or subjectivism) seems to have been generally neglected by philosophy both inside and outside academia: just because the impartial good isn’t everything doesn’t mean that it is rational to generically promote other people’s pursuits of their own respective partial goods. The whole reason humans created impartial morality in the first place is that we realized it works better than each of us pursuing partialist goals.
So, from most moral points of view, the shared standards and norms around how much to sacrifice must be justified on consequentialist grounds.
I should emphasize that antirealism != agent-relative morality; I just happen to think that there is a correlation in plausibility here.
But, even in that case, it often seems that being emotionally healthy requires, among other things, that you not treat your emotional health as a necessary evil that you indulge in.
The claim that it typically requires this to the degree advocated by the OP or Zvi is (a) probably false, in my basic perception, but (b) something that requires proper psychological research before firm conclusions can be drawn.
But for most people, there doesn’t seem to be a viable approach to integrating the obvious-implications-of-EA-thinking and the obvious-implications-of-living-healthily.
This is a crux, because IMO the way that the people who frequently write and comment on this topic seem to talk about altruism represents a much more neurotic response to minor moral problems than what I consider to be typical or desirable for a human being. Of course the people who feel anxiety about morality will be the ones who talk about how to handle anxiety about morality, but that doesn’t mean their points are valid recommendations for the more general population. Deciding not to have a mocha doesn’t necessarily mean stressing out about it, and we shouldn’t set norms and expectations that lead people to perceive it as such. It creates an availability cascade of other people parroting conventional wisdom about too-much-sacrifice when they haven’t personally experienced confirmation of that point of view.
If I think I shouldn’t have the mocha, I just… don’t get the mocha. Sometimes I do get the mocha, but then I don’t feel anxiety about it; I know I just acted compulsively or whatever, and I then think “oh gee, I screwed up” and get on with my life.
The problem can be alleviated by having shared standards and doctrine for budgeting and other decisions. GWWC with its 10% pledge, or Singer’s “about a third” principle, is a first step in this direction.
There is a difference between cost-effectiveness as a methodology and utilitarianism or any other impartial philosophy.
You could just as easily use cost-effectiveness for personal daily goals, and some people do for things such as health and fitness, but generally speaking our minds and society happen to be sufficiently well adapted to let us achieve these goals without needing to think about cost-effectiveness. Even if we are only concerned with the global good, it’s not worthwhile or effective to run explicit cost-effectiveness evaluations of everything in our daily lives, though that shouldn’t stop us from being ready and willing to use them where appropriate.
Conversely, you could pursue the global good without explicitly thinking about cost-effectiveness even in domains like charity evaluation, but the prevailing view in EA is (rightfully) that this would be a bad idea.
What you seem to really be talking about is whether or not we should have final goals besides the global good. I disagree and think this topic should be treated with more rigor: parochial attachments are philosophically controversial, and a great deal of ink has already been spilled on the topic. Assuming robust moral realism, I think the best-supported moral doctrine is hedonistic utilitarianism, and moral uncertainty yields roughly similar results. Assuming anti-realism, I don’t have any reason to intrinsically care more about your family, friends, etc. (and certainly not about your local arts organization) than about anyone else in the world, so I cannot endorse your attitude. I do intrinsically care more about you, as you are part of the EA network, and more about some other people I know, but usually that’s not a large enough difference to justify substantially different behavior given the major differences in cost-effectiveness between local actions and global actions. So I don’t think in literal cost-effectiveness terms, but global benefits are still my general goal. It’s not okay to give money to local arts organizations, go to great lengths to be active in the community, and so on: there is a big difference between the activities that actually are a key component of a healthy personal life and the broader set of vaguely moralized projects and activities that happen to have become popular in middle- and upper-class Western culture. We should be bolder in challenging these norms.
It’s important to remember that having parochial attitudes towards some things in your own life doesn’t necessarily justify attempts to spread analogous attitudes among other people.
Interesting. Y&G said that they checked for a curvilinear relationship and the results “do not suggest substantively different conclusions,” which I understand to mean that there isn’t good evidence for a Kuznets curve.
I did not know that India’s average consumption was so low; perhaps their marginal increase in consumption is not large either.
Looking at Table 3. Am I reading this right: the relationship for low-income countries is +0.0188 kg (annually) per $1 of annual income? That’s 18.8 kg from $1,000, which is about an order of magnitude greater than the Y&G results.
There is a critical omission in this entire line of scholarship: the authors never seem to stop to think about the long-run, systemic value of growing EA itself. They seem to think of it as a bare-bones redirection of small amounts of funds, without taking our potential seriously. It seems prima facie obvious that growing the EA movement has a higher value (person for person) than growing any other social or political movement, and the consequences of achieving an EA majority in any polity would be tremendous. As someone who identifies with EA first and other movements second (the framework the author seems to assume), I think that EA is more philosophically correct than other movements, so its adherents will aim towards better goals. And in practice, EA appears to be more flexible, rational, and productive than other movements. So donations and activism in support of EA movement growth are superior to efforts in favor of other things, assuming equal tractability.
If you fully clarify that this is a project of someone who identifies as an effective altruist, and that your position may or may not be shared by all ‘effective altruists’, then my objections are pretty much moot.
I don’t see how objections about methodology would be mooted merely because the audience knows that the methodology is disputed.
What is the benefit of including them?
That they are predictors of how good or bad a political term will be.
Does the benefit outweigh the cost of potentially unnecessarily shuffling some candidates?
If they are weighted appropriately, they will only shuffle them when it is good to do so.
There is one objective reality and our goal should be to get our understanding as close to it as possible.
Then why do you want me to flip coins or leave things to the reader’s judgement...?
1.) Robust to new evidence
I recently removed that statement from the document because I decided it’s an inaccurate characterization.
2.) Robust to different points of view
This also contradicts the wish for a model that is objective. “Robust to different points of view” really means making no assumptions on controversial topics, which would leave the model incomplete.
Generally speaking, I don’t see justification for your point of view (that issues are generally not predictive of the value of a term… this contradicts how nearly everyone thinks about politics), nor do you seem to have a clear conception of an alternative methodology. You want me to include EA issues, yet at the same time you do not want issues in general to have any weight. Can you propose a complete framework?
Apologies for not being clear enough: I am suggesting the first, and part of the second, i.e. removing issues not related to EA. It is fine to discuss the best available evidence on “not well studied topics”, but I don’t think it’s advisable to give an “official EA position” on those.
I will make it reasonably clear in the proper reports that this is An EA Project rather than The EA Position.
Almost by definition, the issues that are distant from EA will tend to get less weight. So it’s not super important to include them, but at the same time including them will not change many of the outcomes.
The model easily allows for narrowing down to core issues, so probably I (or anyone else who wants to work on it) will start by making a narrow report, and then fill it out fully if time allows. Then the two versions can be marketed differently and people can choose which one to look at.
In addition, my first point is questioning the idea of ranking politicians based on the views they claim or seem to hold, because their actual actions are so unpredictable regardless of what they say.
So it seems like you disagree with the weight I give to issues relative to qualifications, you think it should be less than 1.8. Much less?
I believe EA should stick to spreading the message that each individual can make the world a better place through altruism based on reason and evidence, and that we should trust no politician or anybody else to do it for us.
I think of it more as making a bet than as truly trusting them. Reports/materials certainly won’t hide the possible flaws and uncertainty in the analysis.
I’m not sure exactly; my perception is that (1) often they don’t currently, but the new growth is more likely to be factory farming, and (2) traditional farming isn’t clearly better. Farming in the West is probably covered by more welfare regulations than farming in poor countries.
I’m unclear, are you suggesting that we remove “qualifications” (like the candidate’s experience, character, etc), or that we remove issues that are not well studied and connected to EA (like American healthcare, tax structure, etc), or both?
I downvoted this because I think it’s valuable for the EA community to have a public, credible norm against violating people’s legally recognized rights. Destroying property does this, so we wouldn’t be a very trustworthy community if we endorsed such behavior.
On the other hand, the last sentence of your comment makes me feel that you’re equating my not agreeing with you with my not understanding probability. (I’m talking about my own feelings here, irrespective of what you intended to say.)
Well, OK. But in my last sentence, I wasn’t talking about the use of information terminology to refer to probabilities. I’m saying I don’t think you have an intuitive grasp of just how mind-bogglingly unlikely a probability like 2^(-30) is. There are other arguments to be made on the math here, but getting into anything else just seems fruitless when your initial priors are so far out there (and when you also tell people that you don’t expect to be persuaded anyway).
It’s worth a shot, although long run cooperation / arms races seems like one of the toughest topics to tackle (due to the inherent complexity of international relations). We should start by looking through x-risk reading lists to collect the policy arguments, then see if there is a robust enough base of ideas to motivate frequent judgements about current policy.
1. I think rating candidates on a few niche EA issues is more likely to gain traction than trying to formalize the entire voting process. If you invest time figuring out which candidates are likely to promote good animal welfare and foreign aid policies, every EA has good reason to listen to you. But the weight you place on e.g. a candidate’s health has nothing to do with the fact that you’re an EA; they’d do just as well listening to any other trusted pundit. I’m not sure if popularity is really your goal, but I think people would be primarily interested in the EA side of this.
I think it would be hard to keep things tight around traditional EA issues because then we would get attacked for ignoring some people’s pet causes. They’ll say that EA is ignoring this or that problem and make a stink out of it.
There are some things that we could easily exclude (like health), but then the model would just be a bit less accurate while still having enough breadth to include stances on common controversial topics. The value of this system over other pundits is that it’s all-encompassing in a more formal way, and of course more accurate. The weighting of issues on the basis of total welfare is very different from how other people do it.
Still, I see what you mean. I will keep this as a broad report, but when it’s done I can easily cut out a separate version that narrows things down to the main EA topics. Also, I can raise the minimum weight for issue inclusion above 0.01, to keep the model simpler and more focused on big EA stuff (while not really changing the outcomes).
2. It might be a good idea to stick to issues where any EA would agree: animal welfare, foreign aid. On other topics (military intervention, healthcare, education), values are often not the reason people disagree—they disagree for empirical reasons. If you stick to something where it’s mostly a values question, people might trust your judgements more.
Epistemic modesty justifies convergence of opinions.
If there is empirical disagreement that cannot be solved with due diligence looking into the issue, then it’s irrational for people to hold all-things-considered opinions to one side or the other.
If it’s not clear which policy is better, we can say “there is not enough evidence to make a judgement” and leave it unscored.
So yes, there is a point where I sort of say “you are scientifically wrong, this is the rational position,” but only in cases where there is clear logic and validated expert opinion to back it up, to the point of agreement among good people. People already do this with many issues (consider climate change, for instance, where the scientific consensus is frequently treated as an objective fact by liberal institutions and outlets, despite empirical disagreement among many conservatives).
Obviously right now the opinions and arguments are somewhat rough, but they will be more complete in later versions.
The present and past are the only tools we have to think about the future, so I expect the “pre-driven car” model to make more accurate predictions.
They’ll be systematically biased predictions, because AGI will be much smarter than the systems we have now. And it’s dubious that AI should be the only reference class here (as opposed to human brains vis-a-vis animal brains, most notably).
I have not yet found any argument in favour of AI Risk being real that remained convincing after the above translation.
If so, then you won’t find any argument in favor of human risk being real after you translate “free will” to “acting on the basis of social influences and deterministic neurobiology”, and then you will realize that there is nothing to worry about when it comes to terrorism, crime, greed or other problems. (Which is absurd.)
Also, I don’t see how the arguments in favor of AI risk rely on language like this; are you referring to the real writing that explains the issue (e.g. papers from MIRI, or Bostrom’s book) or are you just referring to simple things that people say on forums?
It seems absurd to assign AI-risk less than 0.0000000000000000000000000000001% probability because that would be a lot of zeros.
The reality is actually the reverse: people are prone to assert arbitrarily low probabilities because it’s easy, but justifying a model with such a low probability is not. See: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/
And, after reading this, you are likely to still underestimate the probability of AI risk, because you’ve anchored yourself at 0.00000000000000000000000000000000000001% and won’t update sufficiently upwards.
Anchoring pulls in different directions depending on context, and it’s infeasible to guess its overall effect in a general sense.
I’m not sure about your blog post, because you are talking about “bits”, which nominally means information, not probability, and that confuses me. If you really mean that there is, say, a 1 − 2^(-30) probability of extinction from some cause other than x-risk, then your guesses are indescribably unrealistic. Here again, it’s easy to arbitrarily assert “2^(-30)” even if you don’t grasp or justify what that really means.
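To make the scale concrete (this is just my arithmetic, not a figure taken from the blog post): 2^(-30) = 1/2^30 = 1/1,073,741,824 ≈ 9.3 × 10^(-10), i.e., roughly one chance in a billion.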
Yes, and I will pay attention to everything anyway unless this thread gets super unwieldy. I am mainly suggesting that people focus on the things that will make the biggest difference.
When I know someone closely, I intrinsically value their life and experiences. I don’t feel as if I wish they had never been born, nor do I wish to kill them.
And it’s straightforward to presume that, for people I don’t know closely, I would feel similarly about them if I knew them well.
So if I want to treat people consistently with my basic inclinations, I should not be NU towards them.
It’s hard to generalize across times and cultures, but ephebophiles and hebephiles seem to be treated much more harshly these days. Often they are placed in the category of pedophiles (who also might have been more tolerated in the past, I’m not sure).
I think historical immigrants to the US had to deal with more frequent racism at the social level. Historical immigration policy might have been guided by economic need rather than moral values.
It seems like a fair assumption that prisoners are broadly treated better today (in the West) than they used to be. Sexual abuse and solitary confinement were probably more common back in the day.