The present and past are the only tools we have to think about the future, so I expect the “pre-driven car” model to make more accurate predictions.
They’ll be systematically biased predictions, because AGI will be much smarter than the systems we have now. And it’s dubious that current AI systems should be the only reference class here (human brains vis-à-vis animal brains being the most notable alternative).
I have not yet found any argument in favour of AI Risk being real that remained convincing after the above translation.
If so, then you won’t find any argument in favor of human risk being real after you translate “free will” to “acting on the basis of social influences and deterministic neurobiology”, and then you will realize that there is nothing to worry about when it comes to terrorism, crime, greed or other problems. (Which is absurd.)
Also, I don’t see how the arguments in favor of AI risk rely on language like this; are you referring to the serious writing that explains the issue (e.g. papers from MIRI, or Bostrom’s book), or just to casual things people say on forums?
It seems absurd to assign AI risk less than 0.0000000000000000000000000000001% probability because that would be a lot of zeros.
The reality is actually the reverse: people are prone to assert arbitrarily low probabilities because it’s easy, but justifying a model with such a low probability is not. See: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/
And, after reading this, you are likely to still underestimate the probability of AI risk, because you’ve anchored yourself at 0.00000000000000000000000000000000000001% and won’t update sufficiently upwards.
Anchoring pulls estimates in different directions depending on context, and it’s infeasible to guess its net effect in general.
I’m not sure about your blog post, because you are talking about “bits”, which nominally means information, not probability, and that confuses me. If you really mean that there is, say, a 1 − 2^(-30) probability of extinction from some cause other than x-risk, then your guesses are indescribably unrealistic. Here again, it’s easy to arbitrarily assert “2^(-30)” even if you don’t grasp or justify what that really means.
On the other hand, the last sentence of your comment makes me feel that you’re equating my not agreeing with you with my not understanding probability. (I’m talking about my own feelings here, irrespective of what you intended to say.)
Well, OK. But in my last sentence, I wasn’t talking about the use of information terminology to refer to probabilities. I’m saying I don’t think you have an intuitive grasp of just how mind-bogglingly unlikely a probability like 2^(-30) is. There are other arguments to be made on the math here, but getting into anything else just seems fruitless when your initial priors are so far out there (and when you also tell people that you don’t expect to be persuaded anyway).
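To make the magnitude concrete, here’s a minimal sketch (my own illustrative comparison, not from either comment) of the bits-to-probability conversion and of just how small 2^(-30) is:

```python
# A claim of "b bits" against an event corresponds to probability p = 2**(-b).
bits = 30
p = 2 ** (-bits)
print(f"2^(-{bits}) = {p:.3e}")  # ~9.313e-10, i.e. roughly 1 in 1.07 billion

# For scale: a single ticket in a 6-of-49 lottery wins the jackpot with
# probability 1 in C(49, 6) = 13,983,816.
lottery = 1 / 13_983_816
print(f"That is ~{lottery / p:.0f}x less likely than a lottery jackpot.")
```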
My takeaway is that the EA Forum’s voting is better than LessWrong’s.
Just to be clear, saving lives for several hundred thousand dollars each would still be efficient enough to justify donating most of one’s disposable income. The rhetorical force of the drowning child argument is useful for philosophy classrooms and public media where you have to prod people who are otherwise disappointingly selfish, but I don’t think many of us are going to rely on that as a rigorous basis for why we do what we do.
I would interpret your post as merely objecting that EA organizations are misrepresenting things in order to foster more aid for the otherwise-good goal of helping people in severe poverty. But the idea that we actually aren’t obligated to donate just because the cost per life saved is $100,000 instead of $5,000 is ridiculous.
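For what it’s worth, here is a back-of-the-envelope sketch (all figures are my own hypothetical inputs, not from the post) of why even a much higher cost per life saved keeps the conclusion intact:

```python
# Lives saved over a hypothetical donor's giving career at different
# cost-per-life-saved figures. All inputs are illustrative assumptions.
annual_donation = 10_000  # dollars donated per year
years = 40                # length of giving career
total = annual_donation * years

for cost_per_life in (5_000, 100_000, 300_000):
    lives = total / cost_per_life
    print(f"${cost_per_life:>7,} per life -> ~{lives:.0f} lives saved")

# Even at $100,000 per life, this donor saves ~4 lives over a career,
# which is arguably still enough to carry the obligation.
```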
I think we should be more specific about what concrete changes we’re discussing. You’ve mentioned “integration,” “embracing and working around”… what does that really mean? Are you suggesting that we spend less money on the effective causes and more money on the mainstream causes? That would be less effective (obviously), and I don’t see how it’s supported by your arguments here.
If you are referring to career choice (we might go into careers to shift funding around), I don’t see how the large amount of funding going to ineffective causes really changes the issue. If I can choose between managing $1M of global health spending or $1M of domestic health spending, there’s no real debate to be had.
If you just mean that EAs should provide helpful statements and guidance to other efforts… this can be valuable, and we do it sometimes. We can provide explicit guidance, which gives people better answers but faces the problems of (i) learning about a whole new set of issues and (ii) navigating reputational risks. Examples of this include the Founders Pledge report on climate change and the Candidate Scoring System. As you can see in both cases, it takes a substantial amount of effort to make respectable progress here.
However, we can also think about empowering other people to apply an EA toolkit within their own lanes. Vox’s Future Perfect column is mostly an example of this, as it looks at American politics from a mildly more EA point of view than is typical. I can also imagine articles along the lines of “how EA inspired me to think about X”, where X is an ineffective cause area. I’m a big fan of spreading the latter kind of message.
Note: I think your argument is easy enough to communicate by merely pointing out the different quantities of funding in different sectors, and trying to model and graph everything in the beginning is unnecessary complexity.
This is why I’ve argued that for EA to make political judgements about broad partisan issues and elections, it should come together with a formal or semi-formal structure to aggregate and compare evidence from both sides. If we can’t make reliable political judgements or can’t make a meaningful political effort, then we shouldn’t pretend that it counts as effective activism. The justifications of who to vote for and how much each vote is worth have so far been methodologically lacking, as they leave many basic counterpoints (like the ones here) unanswered. In particular, the points raised here about nuclear war and democracy underscore the fact that EAs commenting on Trump have been generally uneducated, and occasionally clueless, about international relations. If we do politics, then we’ll have to do it systematically better. In the spirit of this main idea, I’ll resist the urge to comment on the object-level of this essay.
However, everybody complaining about sources needs to take a step back and remember how many people write official-sounding essays here sourced entirely with inline links to LessWrong and rationalist bloggers. Strange how nobody complained about sources until now.
There are many problems here:
There is not a clear distinction between preparations for offense and preparations for defense. The absence of this distinction is precisely what gives rise to threats and instability in cases like North Korea. The ambiguity stems from structural problems of limited information and the nature of military forces, not from the ideologies of the current milieu.
The most obvious way for EAs to fix the deterrence problem surrounding North Korea is to contribute to the mainstream discourse and efforts which already aim to improve the situation on the peninsula. While it’s possible for alternative or backchannel efforts to be positive, they are far from being the “obvious” choice.
Backchannel diplomacy may be forbidden by the Logan Act, though it has not really been enforced in a long time.
The EA community currently lacks expertise and wisdom in international relations and diplomacy, and therefore does not currently have the ability to reliably improve these things on its own.
The idea of making a compromise by coming up with a different version of utilitarianism is absurd. First, the vast majority of the human race does not care about moral theories; they rarely make a big dent in popular culture, let alone the world of policymakers and strategic power. Second, it makes no sense to try to compromise with people by solving every moral issue under the sun when you could instead pursue the much less Sisyphean task of compromising only on the things that actually matter for the dispute at hand. Finally, it’s not clear that any of the disputes with North Korea actually crux on disagreements of moral theory.
The idea that compromising with North Korea is somehow neglected or unknown in the international relations and diplomacy communities is false. Compromise is ubiquitously recognized as an option in such discourse. And there are widely recognized barriers to it, which don’t vanish just because you rephrase it in the language of utilitarianism and AGI.
Academia has influence on policymakers when it can help them achieve their goals; that doesn’t mean it always has influence. There is a huge difference between the practical guidance given by regular IR scholars and groups such as FHI, and ivory-tower moral philosophy which just tells people what moral beliefs they should have. The latter has no direct effect on government business, and probably very little indirect effect.
The QALY paradigm does not come from utilitarianism. It originated in the economics and healthcare literature, to serve policymakers and funders who already had utilitarian-ish goals.
Your perception that the EA community benefits from an association with utilitarianism is the opposite of the reality: utilitarianism is more likely to have a negative perception in popular and academic culture, and we have put nontrivial effort into arguing that EA is safe and obligatory for non-utilitarians. You’re also ignoring the widely acknowledged academic literature on how axiology can differ from decision methods; sequence thinking and cluster thinking are examples of the latter.
Talking about people or countries as rational agents with utility functions does not mean we have to pretend that they act on the basis of moral theories like utilitarianism.
In terms of satire, I’m not sure that satirising the choice to not eat animal products is the funniest topic.
Right, it’s not supposed to be funny. I hope that reading this post makes one feel a sense of revulsion at covering up moral obligations with so many levels of rationalization. The point is that we should feel equally strongly about donations to charity.
Necessity/sufficiency tests are too narrow. Aid is neither necessary nor sufficient to end poverty, but we do it anyway.
But even in that case, it often seems that being emotionally healthy requires, among other things, that you not treat your emotional health as a necessary evil that you indulge.
Whether it typically requires this to the degree advocated by the OP or Zvi is (a) probably not the case, in my basic perception, but (b) a question that requires proper psychological research before firm conclusions can be drawn.
But for most people, there doesn’t seem to be a viable approach to integrating the obvious-implications-of-EA-thinking and the obvious-implications-of-living-healthily.
This is a crux, because IMO the way that the people who frequently write and comment on this topic seem to talk about altruism represents a much more neurotic response to minor moral problems than what I consider to be typical or desirable for a human being. Of course the people who feel anxiety about morality will be the ones who talk about how to handle anxiety about morality, but that doesn’t mean their points are valid recommendations for the more general population. Deciding not to have a mocha doesn’t necessarily mean stressing out about it, and we shouldn’t set norms and expectations that lead people to perceive it as such. It creates an availability cascade of other people parroting conventional wisdom about too-much-sacrifice when they haven’t personally experienced confirmation of that point of view.
If I think I shouldn’t have the mocha, I just… don’t get the mocha. Sometimes I do get the mocha, but then I don’t feel anxiety about it, I know I just acted compulsively or whatever and I then think “oh gee I screwed up” and get on with my life.
The problem can be alleviated by having shared standards and doctrine for budgeting and other decisions. GWWC’s 10% pledge, or Singer’s “about a third” principle, is a first step in this direction.
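As a toy illustration of what such a shared standard does in practice (the income figure is hypothetical, and the two rules are just the ones named above):

```python
# Toy comparison of two pledge-style budgeting rules on a hypothetical income.
gross_income = 60_000  # dollars per year (hypothetical)

gwwc = 0.10 * gross_income   # GWWC pledge: give 10% of income
singer = gross_income / 3    # Singer's "about a third" principle

print(f"GWWC 10% pledge:  ${gwwc:,.0f}/year")
print(f"Singer 'a third': ${singer:,.0f}/year")
# Once the giving budget is fixed in advance, everyday purchases like a
# mocha fall outside the moral calculus, which is the point of the standard.
```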