OK, CSS5 will address this by looking more broadly at the literature and the articles you cite, or maybe I will just focus more on the economist survey.
You seem to be trying to analyze these things with personal intuition, imagining what will happen when people get something like housing or a minimum wage. I’d say it’s better to look at reliable studies and surveys. Here are some sources:
Economists generally agree that rent control reduces housing availability and raises housing prices. http://www.igmchicago.org/surveys/rent-control
Cities’ restrictions against building new housing reduce housing availability and slow the general economy. Study 1: https://faculty.chicagobooth.edu/chang-tai.hsieh/research/growth.pdf Study 2: https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.32.1.3
Additionally, you seem to be looking at this as a matter of consumer choices, recommending rent-to-own instead of buying or regular renting. I’d say it’s usually better to trust that consumers are making smart choices on their own, and instead worry about government policies that empower them.
One-sided questions make more sense when there is an established position to question. If it had become a common point of view that we simply feel good about ETG, then it would make sense to seek out opposing views. But, to my knowledge, no one has made such a case for ETG before. Asking for people’s feelings about it is mainly a step into new territory. To the extent that gut feelings about ETG are implicitly circulated within EA, they seem to be generally negative, which means that specifically asking people for gut feelings in favor of ETG would make more sense.
A worldview gives specific reasons to support or oppose something; that’s different from feelings.
Knowing how ETGers actually feel about their work is different from generically asking how people feel about it. The former of course is useful evidence.
If it’s crankery then it shouldn’t get a fairly neutral report.
If the Burke et al. article that you’re largely basing the 26% number on is accurate (which I strongly doubt)
What is wrong with it?
it seems like trying to cause economic activity to move to more moderate climates might be an extremely effective intervention.
Economic activity already goes to wherever it will be the most profitable. I don’t see why we would expect companies to predictably err.
And, even if so, I don’t share the intuition that it might be extremely effective.
He will be in it.
Some of this reasoning about social impacts, nonzero probability of severe collapse, dynamic effects, etc., applies equally well to many other issues. Take your comment on S-risks: you could tell a similar story for just about any cause area. And everyone has their own opinion on what kinds of biases EAs have. So a basic GDP-loss estimate is not a bad way to approach things for comparative purposes. You are right, though, that the expected costs are a lot more than 2% or something tiny like that.
In Candidate Scoring System I gave rough weights to political issues on the basis of long-run impact from ideal US federal policy. I estimated the global GDP costs of future GHG emissions at 26% by 2090, and used that to give climate change a weight of 2.9. Compare that to animal farming (15.6), existential risks from emerging technologies (15), immigration (9), zoning policy (1.5), and nuclear security (1.2).
Whether climate adaptation could also be potentially high value for EAs
For the same game theoretic reasons that make climate change a problem in the first place, I would expect polities to put too much emphasis on adaptation as opposed to prevention.
I can only give a secondhand perception: from people (including economists) discussing it offhandedly on blogs, social media, etc., MMT appears to be crankery that sometimes violates established economic knowledge and sometimes is so vaguely defined that it’s “not even wrong”.
If this is true, seeing it published under “Future Perfect” is worrying.
Also, if your criterion for choosing an intervention is how frequently it still looks good under different models and priors, as people seem to be suggesting in lieu of EV maximization, you will still get similar curses—they’ll just apply to the number of models/priors, rather than the number in the EV estimate.
it seems like the crux is often the question of how easy it is to choose good priors
Before anything like a crux can be identified, complainants need to identify what a “good prior” even means, or what strategies are better than others. Until then, they’re not even wrong—it’s not even possible to say what disagreement exists. To airily talk about “good priors” or “bad priors”, being “easy” or “hard” to identify, is just empty phrasing and suggests confusion about rationality and probability.
The proposed solution of using priors just pushes the problem to selecting good priors.
The problem of the optimizer’s curse is that the EV estimates of the highest-EV options are predictably over-optimistic, in proportion to how unreliable the estimates are. Once you adjust the estimates toward a prior in proportion to their unreliability, that problem doesn’t exist anymore.
The fact that you don’t have guaranteed accurate information doesn’t mean the optimizer’s curse still exists.
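To make that concrete, here is a toy simulation (purely illustrative, with made-up numbers; nothing from the actual thread). It shows that picking the option with the highest raw estimate predictably overestimates the chosen option’s value, while shrinking each estimate toward a prior in proportion to its unreliability removes that predictable bias:

```python
import numpy as np

rng = np.random.default_rng(0)

n_options, n_trials = 20, 10_000
prior_mean, prior_sd, noise_sd = 0.0, 1.0, 2.0

naive_bias, corrected_bias = [], []
for _ in range(n_trials):
    true_values = rng.normal(prior_mean, prior_sd, n_options)      # true EVs
    estimates = true_values + rng.normal(0, noise_sd, n_options)   # noisy estimates

    # Naive: pick the option with the highest raw estimate.
    best = np.argmax(estimates)
    naive_bias.append(estimates[best] - true_values[best])

    # Bayesian correction: shrink each estimate toward the prior mean
    # in proportion to how unreliable it is.
    shrink = prior_sd**2 / (prior_sd**2 + noise_sd**2)
    posterior = prior_mean + shrink * (estimates - prior_mean)
    best_c = np.argmax(posterior)
    corrected_bias.append(posterior[best_c] - true_values[best_c])

print(f"naive bias of chosen option:     {np.mean(naive_bias):+.3f}")      # clearly positive
print(f"corrected bias of chosen option: {np.mean(corrected_bias):+.3f}")  # roughly zero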
I don’t think there’s any complete solution to the optimizer’s curse
Well, there is: just spend too much time worrying about model uncertainty and other people’s priors, and too little time worrying about expected value estimation. Then you’re overcorrecting for the optimizer’s curse, so your charity selections will be less accurate and predictably biased in favor of low-EV, high-reliability options. So it’s a bad idea, but you’ve solved the optimizer’s curse.
If you’re presented with multiple priors, and they all seem similarly reasonable to you, but depending on which ones you choose, different actions will be favoured, how would you choose how to act?
Maximize the expected outcome over the distribution of possibilities.
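As a minimal sketch of what that means in practice (hypothetical numbers and action names; this is my illustration, not anything from the thread): put a credence weight on each prior you find similarly reasonable, and choose the action with the highest credence-weighted EV.

```python
import numpy as np

# Hypothetical inputs: three candidate priors you find similarly
# reasonable, and each prior's expected value for two actions.
prior_weights = np.array([0.4, 0.35, 0.25])  # credence in each prior
ev_by_prior = np.array([
    [10.0, 4.0],   # EVs of (action A, action B) under prior 1
    [ 2.0, 5.0],   # ... under prior 2
    [ 1.0, 6.0],   # ... under prior 3
])

combined_ev = prior_weights @ ev_by_prior  # credence-weighted EV per action
actions = ["A", "B"]
print(dict(zip(actions, combined_ev)))
print("choose:", actions[int(np.argmax(combined_ev))])
```

Note that with these made-up numbers the combined EV favors A even though B wins under two of the three priors, which bears on the next question.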
If one action is preferred with almost all of the priors (perhaps rare in practice), isn’t that a reason (perhaps insufficient) to prefer it?
What do you mean by “the priors”? Other people’s priors? Well if they’re other people’s priors and I don’t have reason to update my beliefs based on their priors, then it’s trivially true that this doesn’t give me a reason to prefer the action. But you seem to think that other people’s priors will be “reasonable”, so obviously I should update based on their priors, in which case of course this is true—but only in a banal, trivial sense that has nothing to do with the optimizer’s curse.
To me, using this could be an improvement over just using priors
Hm? You’re just suggesting updating one’s prior by looking at other people’s priors. Assuming that other people’s priors might be rational, this is banal—of course we should be reasonable, epistemically modest, etc. But this has nothing to do with the optimizer’s curse in particular, it’s equally true either way.
I ask the same question I asked of OP: give me some guidance that applies for estimating the impact of maximizing actions that doesn’t apply for estimating the impact of randomly selected actions. So far it still seems like there is none—aside from the basic idea given by Muelhauser.
just using priors never fully solved the problem in practice in the first place
Is the problem the lack of guaranteed knowledge about charity impacts, or is the problem the optimizer’s curse? You seem to (incorrectly) think that chipping away at the former necessarily means chipping away at the latter.
What I am saying is that these methods don’t address the optimizer’s curse just by being included, and I suspect they won’t help at all on their own in some cases.
You seem to be using “people all agree” as a stand-in for “the optimizer’s curse has been addressed”. I don’t get this. Addressing the optimizer’s curse has been mathematically demonstrated. Different people can disagree about the specific inputs, so people will disagree, but that doesn’t mean they haven’t addressed the optimizer’s curse.
Maybe checking sensitivity to priors and further promoting interventions whose value depends less on them (among some set of “reasonable” priors) would help. You could see this as a special case of Chris’s suggestion to “Entertain multiple models”.
Perhaps you could even use an explicit model to combine the estimates or posteriors from multiple models into a single one in a way that either penalizes sensitivity to priors or gives less weight to more extreme estimates, but a simpler decision rule might be more transparent or otherwise preferable.
I think combining into a single model is generally appropriate. And the sub-models need not be fully, explicitly laid out.
Suppose I’m demonstrating that poverty charity > animal charity. I don’t have to build one model assuming “1 human = 50 chickens”, another model assuming “1 human = 100 chickens”, and so on.
Instead I just set a general standard for how robust my claims are going to be, and I feel sufficiently confident saying “1 human = at least 60 chickens”, so I use that rather than my mean expectation (e.g. 90).
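A minimal sketch of that kind of robustness standard, assuming hypothetical welfare-per-dollar figures (none of these numbers are real estimates):

```python
# Hypothetical numbers for the poverty-vs-animal comparison above.
mean_tradeoff = 90          # mean expectation: 1 human = 90 chickens
conservative_tradeoff = 60  # robust lower bound: 1 human = at least 60 chickens

human_welfare_per_dollar = 0.02   # assumed human-welfare units per $
chicken_welfare_per_dollar = 1.0  # assumed chicken-welfare units per $

# Compare charities using the conservative bound, not the mean:
poverty_score = human_welfare_per_dollar * conservative_tradeoff  # 1.2
animal_score = chicken_welfare_per_dollar                         # 1.0
print("poverty > animal even at the conservative bound:",
      poverty_score > animal_score)
```

If the comparison holds at the conservative bound, there is no need to enumerate separate models at 50, 100, and so on.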
kk will add a link. But major edits would be a pain because the old post is in HTML.
This is a rather strange post. (a) you’re only looking for objections, (b) you’re looking for feelings rather than sound reasons or heuristics. Just an observation.
I don’t think this leaves you in a good position if your estimates and rankings are very sensitive to the choice of “reasonable” priors.
What do you mean by “a good position”?
You could try to choose some compromise between these priors, but there are multiple “reasonable” ways to compromise. You could introduce a prior on these priors, but you could run into the same problem with multiple “reasonable” choices for this new prior.
Ah, I guess we’ll have to switch to a system of epistemology which doesn’t bottom out in unproven assumptions. Hey hold on a minute, there is none.
I’m getting a little confused about what sorts of concrete conclusions we are supposed to take away from here.
I find it unlikely that veganism wasn’t influenced by existing political arguments for veganism.
I find it obvious. What political arguments for veganism even exist? That it causes climate change? Yet EAs give more attention to the suffering impacts than to the climate impacts.
I find it unlikely that a focus on institutional decision making wasn’t influenced by existing political zeitgeist around the problems with democracy and capitalism.
The mere idea that “there are problems with democracy and capitalism” is relatively widespread, not unique to leftism, and therefore doesn’t detract from my point that relatively moderate positions (which frequently acknowledge problems with democracy and capitalism) have better impacts on EA than extreme ones. The leftist zeitgeist is notably different and even contradictory with what EAs have put forward, as noted above.
I find it unlikely that the global poverty focus wasn’t influenced by the existing political zeitgeist around inequality.
People have focused on poverty as a target of charity for millennia, and people who worry about inequality (as opposed to worrying about poverty) are more resistant to EA ideas and demands.
it also provides the backdrop from which EAs are reasoning.
There is an opportunity cost in not having a better backdrop. Even against a backdrop of political apathy, there would not be less information and fewer ideas (broadly construed) in the public sphere, just different ones, presented differently.
I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world
Yes, and these things are explicitly under attack from political actors.
Funding bednets, or policy reform, or AI risk research, are all contingent on a combination of those core EA ideas that we take for granted with a series of object-level, empirical beliefs, almost none of which EAs are naturally “the experts” on
When EAs are not the experts, EAs pay attention to the relevant experts.
“Politicized” questions and values are no different, so we need to be open to feedback and input from external experts
This is not about whether we should be “open to feedback and input”. This is about whether politicized stances are harmful or helpful. All the examples in the OP are cases where I am or was, in at least a minimal theoretical sense, “open to feedback and input”, but quickly realized that other people were wrong and destructive. And other EAs have also quickly realized that they were being wrong and destructive.
I have the impression that asking for a higher salary is saying that what I do is more important than what others do, and that I can judge better than they can what to do with that money.
Yes, and it is more important, and you can do better—because you’re on the EA forum and they’re not.
If you’re employed by an EA organization then feel free to take a low salary.
It’s a personal feeling, but it seems important to me that what we earn in a society be based on the importance of our contribution to that society, which of course is not currently the case. And we have too many well-paid jobs that are really harmful to society.
Higher-paying jobs do tend to provide more value to employers and customers; that’s why they are willing to pay those salaries. It’s true that this can be distorted by wealth inequality and other issues, but giving everyone the same salary wouldn’t necessarily be any more accurate; it’s not the case that everyone contributes equally to society either.
Yes, the basic idea here is that you, as an Effective Altruist, have some power and knowledge to judge which use of money is more important: your employers’/customers’ use, or a charity of your choice.
So I wonder if there have been studies or articles about that, trying to estimate what percentage of the population would need to be willing to lower their salaries for it to have a positive impact.
That’s what income taxes are. People vote for governments to impose taxes on everyone, which is easier than convincing people to give up their own money individually. Income taxes are usually progressive, meaning the rich pay a higher percentage of their income than the poor, which reduces inequality.
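For illustration only, here is a toy progressive schedule (made-up brackets, not any real tax code) showing how marginal rates make the effective rate rise with income:

```python
# Toy progressive schedule: (bracket upper bound, marginal rate).
brackets = [(10_000, 0.0), (40_000, 0.20), (float("inf"), 0.40)]

def tax(income: float) -> float:
    """Tax owed under the toy marginal-rate schedule above."""
    owed, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income > lower:
            owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

for income in (20_000, 200_000):
    print(income, f"effective rate: {tax(income) / income:.1%}")
# 20000  effective rate: 10.0%
# 200000 effective rate: 35.0%
```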
Tax money usually stays in the same country rather than going to the global poor, but that’s no different from people agreeing to take a lower salary, because your customers and employers are probably going to be from your own country anyway.
Veganism is probably a good example here.
Who has complained that EA is bad because it ignored animals? EAs pursued animal issues of their own volition. Peter Singer has been the major animal rights philosopher in history. Animal interests are not even part of the general political climate.
Institutional decisionmaking might be another.
Looking at 80k Hours’ writeup on institutional decision making, I see nothing with notable relevance to people’s attacks on EA. EAs have been attacked for not wanting to overthrow capitalism, not wanting to reform international monetary/finance/trade institutions along the lines of global justice, and funding foreign aid that acts as a crutch for governments in the developing world. None of these things has a connection to better institutional decision making other than the mere fact that they pertain to the government’s structure and decisions (which is broad enough to be pretty meaningless). 80k Hours is looking at techniques for forecasting and judgment, drawing heavily upon psychology and decision theory. They are talking about things like prediction markets and forecasting that have been popular among EAs for a long time. There are no citations of, and no apparent inspiration from, any of these criticisms.
The general political climate does not deal with forecasting and prediction markets. The last time it did, prediction markets were derailed because the general political climate created opposition (the Policy Analysis Market in the Bush era).