First of all, thanks to whoever is posting these transcripts. I almost definitely would never have watched the video!
One is the conceptual argument that states are the only legitimate political authorities that we have in this world, so they’re the only ones who should be doing this governance thing. …Now all of those things are true.
I think this is considerably more controversial than you assume. While it has been a few years since I studied political philosophy, my understanding is that philosophers have largely given up on the classical problem of political authority—justifying why governments have a unique right to coerce people, and why people have an obligation to obey specifically because a government said so. All the attempted justifications are ultimately rather unsatisfying. It seems much more plausible that governments are justified if/when they pass good laws that protect people’s rights and improve welfare—i.e. the morality of laws justifies the government, rather than the government justifying the morality of the laws. But this is obviously rather contingent, and doesn’t suggest that states are in any way the only legitimate source of political authority. For more discussion of this, I recommend Michael Huemer’s excellent The Problem of Political Authority. There’s also a Stanford Encyclopedia of Philosophy article.
The authority of a company, for example, plausibly comes from something like their market power and the influence on public opinion. And you can argue about how legitimate that authority is
Here I think you are misunderstanding the potential legitimacy of the influence of a private company. Their justification comes not from market power, but from people freely choosing to buy their products, and the expertise they demonstrate in effectively meeting this demand. To give a mundane example, a major shipping company would be justified in providing significant input into international port standardization rules by virtue of their expertise in shipping; expertise which has been implicitly endorsed by everyone who chose to hire them for shipping services.
Neat feature, thanks for adding.
Thanks for writing this up, I found it very interesting, and it seems like a great project.
Quick question: did you mean to write “biology students” in the quote below, or is there a tighter link between biosecurity and maths than I expected?
getting the general direction to focus on within their study discipline (e.g. biosecurity for maths students)
(2) does complicate things, and while I favor expanding abortion rights, I’m not sure I’d think of them as a facet of the “expanding circle” in the same way as I do the expansion of civil rights for certain groups.
This seems like a quite backwards way of describing the situation. The abortion case is very similar to canonical expanding circle cases, like the end of slavery, prohibition of spousal rape, or animal rights:
In each case one group (slave owners, husbands, abortionists, omnivores) was systematically acting in a way that advantaged themselves (forcing slaves to produce cotton, raping wives for sex, killing fetuses to avoid pregnancy, killing animals for meat) at high cost (brutality and loss of freedom, sexual abuse, death, torture and death) to another group (black people, wives, fetuses, animals).
In each case the dominant group justified their conduct partly by arguing that the other group didn’t really count as people, perhaps because they lacked certain mental capacities.
In each case the dominant group justified their conduct partly by arguing that the other group had reduced or no ability to feel pain.
In each case the dominant group justified their conduct partly by arguing that they had an inherent right to act in this way, even if it hurt the other group.
In each case the dominant group argued that it was fine for others to avoid taking this action, but it was wrong for such conscientious objectors to prohibit them from doing so.
In each case the victimised group lacked formal representation in government.
In each case the scale of the issue was very large, and one could reasonably believe ending it was the most important issue in the world.
So, contrary to many western people’s views, the abortion case is very similar to the standard expanding circle cases. The fact that we have not seen a similar expansion of the moral circle to unborn children is either a big problem with expanding circle theory (if we take it as a positive description of human values) or with our current legal and social system (if we take the theory as a normative prescription for who we *should* care about).
A persistent worry about solar geoengineering research concerns moral hazard: the worry that attention to plan B will reduce commitment to plan A. Having solar geoengineering as a backup will decrease commitment to reducing carbon emissions, which almost all researchers agree to be the top priority.
I’m not really sure why this would be a problem, though I read the sections in your paper—perhaps I just didn’t understand properly. Moral hazard occurs when one group (Agent) pays another (Insurer) to cover the damages of some future event that Agent is partly responsible for. Because of this insurance, Agent has less incentive to avoid/mitigate the event. Insurer now has more incentive, but if it is cheaper for Agent to mitigate the event than for Insurer, total mitigation will go down (or total $ expenditure on mitigation will have to go up). This is inefficient, but hard to avoid due to imperfect contracting and monitoring.
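To make the mechanism concrete, here is a toy numeric sketch (all numbers are made up purely for illustration):

```python
# Toy moral hazard model; all numbers are hypothetical.
# Some event would cause a loss of 100. The Agent can prevent it
# for 10; the Insurer can only prevent it for 30.
loss = 100
agent_cost = 10    # Agent is the cheaper mitigator
insurer_cost = 30

# Without insurance: Agent bears the loss, so it mitigates whenever
# agent_cost < loss. Total mitigation spending: 10.
uninsured_spend = agent_cost

# With insurance: Agent no longer bears the loss, so it stops
# mitigating; Insurer now mitigates at its higher cost. Total
# mitigation spending: 30 (or, if Insurer cannot act, mitigation falls).
insured_spend = insurer_cost

print(insured_spend - uninsured_spend)  # -> 20, the efficiency loss
```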
But in the geoengineering case Agent and Insurer are the same—they’re the researchers/governments. This doesn’t seem so much like moral hazard as simply the substitution effect, in the same way that solar and geothermal energy are (imperfect) substitutes. Given the optionality inherent in research, it seems you need some strong irrationality story to say there will be a net-negative expected substitution effect.
I agree the weaponisation risks make sense as a reason not to do it, but they seem separate from the moral hazard idea.
Like, “Hey I want you to be able to cover factory farming as a beat,” seems fine, but “Hey I want you to report on how factory farming is evil and bad,” you know, then you’re asking for sponsored content, maybe without clarity to the readers about what is getting paid for. So you’d want it to be something where you want the beat to exist rather than you want a particular angle on coverage.
There is a significant disanalogy between factory farming and AI safety: any journalist hired to cover factory farming is almost certainly going to be against it, whereas there are plenty of people interested in covering AI who are not well aligned on the safety issue.
I have one idea, but I think this strategy may be a bad idea so am loath to share it.
Good news: I have now successfully guarded against this infohazard by forgetting what I was talking about.
There are many different things that can cause us to change what we value. Some seem like change-processes that our current values would endorse:
I thought a lot about the issue, decided that two of my values were in conflict, and chose to prioritize one over the other.
Some seem like random processes we should resist:
I got hit in the head, and the damage caused me to change my personality and values.
Some seem actively adversarial:
Years of propaganda wore me down and caused me to love Dear Leader.
I subconsciously realized that opinion X was high status, and found it expedient to adopt it as well.
In general I think people’s opinions on the issue depend on how common they think these different cases are. I am generally quite pessimistic here; I think the first case is quite rare, and most cases that appear to be of this form are really examples of the third or fourth case. This makes me pessimistic about the long-term future, and I am interested in what we can do to reduce the influence of the last three cases.
Thanks, I thought you (or your friend) had some interesting points.
With regard to the ’80k is a central planner’ point, I think it’s important to bear in mind that economists don’t object to planning per se. It is good that individual firms perform their own planning; what matters is that:
Firms have the right incentives.
The price mechanism provides signals between firms, that also aid intra-firm decision making via shadow pricing.
Planning occurs at the right scale—firms are under optimization pressure to be neither too small nor too large.
All of which are of course largely absent from the charity sector.
I think 80k is not subject to this critique inasmuch as they direct a relatively small fraction of total resources. They’re more similar to a single firm, which has a view on an under-addressed market niche, than a socialist planner trying to solve for everything in general equilibrium. Firms often pursue plans without direct price signals (e.g. developing a new product for which no market or price currently exists), and I would analogize 80k to this in some regards.
Where I do think you could make this criticism would be with regard to the ‘planning’ of the EA movement. To the extent you think that they have over-emphasized applying to EA groups (or at least failed to communicate adequately their nuance on the issue), this looks like a classic case of central planners massively over-producing one good and under-producing another, with no price mechanism to equilibrate supply and demand.
I don’t know much about it, but I did skim through the National Incidence Study of Child Abuse and Neglect. The thing that stood out to me the most was the massive difference in abuse rates between different family structures:
Married biological parents: < 3 per 1000
Single parent with partner: > 55 per 1000
Obviously we can’t say this is all causal—in general all good properties are correlated, so it’s likely there are shared genetic (and other) causes.
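To spell out the arithmetic, taking the quoted bounds at face value:

```python
# Back-of-the-envelope risk ratio from the two NIS figures above.
married_bio = 3 / 1000           # "< 3 per 1000", so an upper bound
single_with_partner = 55 / 1000  # "> 55 per 1000", so a lower bound

print(single_with_partner / married_bio)  # -> ~18x, at minimum
```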
My impression is that government prosecutors have a lot of discretion, so if you look too sympathetic they would simply turn a blind eye rather than suffer the negative media attention.
Thanks, this was very interesting. Quick question—is there meant to be an answer to this question?
Question: Were there any differences in zebra affinity between Americans and Britons?
It’s quite easy to research the cost of creating a rice farm, or a power plant, as well as get a tight bounded probability distribution for the expected price you can sell your rice or electricity at after making the initial investment. These markets are very mature and there’s unlikely to be wild swings or unexpected innovations that significantly change the market.
This doesn’t affect your overall article much, but it’s worth noting that commodity prices can be very volatile. Looking up the generic rice contract on Bloomberg for example, and picking the more extreme years but the same month (to avoid seasonality):
1998 April: 10.2
2002 April: 3.6
2004 April: 11.3
2005 April: 7.2
2008 April: 23.8
2010 April: 12.6
2013 April: 15.8
2015 April: 10.0
You do have the ability to lock in the current implied profitability using futures, but in general commodity markets seem to be more volatile than non-commodity markets.
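To put a rough number on it, a quick sketch using the April prices listed above (which, as noted, are deliberately extreme years):

```python
# April generic rice contract prices quoted above.
prices = {1998: 10.2, 2002: 3.6, 2004: 11.3, 2005: 7.2,
          2008: 23.8, 2010: 12.6, 2013: 15.8, 2015: 10.0}

lo, hi = min(prices.values()), max(prices.values())
print(round(hi / lo, 1))  # -> 6.6x between the 2002 low and the 2008 high
```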
I think one paper shows that there were almost 40 near misses, and I think that was put up by the Future of Life Institute, so some people can look up that paper, and I think that in general it seems that experts agree some of the biggest risks from nuclear would be accidental use, rather than deliberate and malicious use between countries.
Possibly you are thinking of the Global Catastrophic Risks Institute, and Baum et al.’s A Model for the Probability of Nuclear War?
Thanks for highlighting this, I thought it was interesting. It does seem that, if you thought getting Vox to write about AI was good, it would be good to have an offsetting right-wing spokesman on the issue.
One related point would be that we can try to avoid excessively associating AI risk with left wing causes; discrimination is the obvious one. The alternative would be to try to come up with right-wing causes to associate it with as well; I have one idea, but I think this strategy may be a bad idea so am loath to share it.
This was very interesting. Retrospectives on projects that didn’t work can be extremely helpful to others, but I imagine can also be tough to write, so thanks very much!
It takes a long time to craft a response to posts like these. Even if there are clear problems with the post, given the sensitive topic you have to spend a lot of time on nuance, checking citations, and getting the tone right. That is a very high bar, one that I don’t think is reasonable to expect everyone to pass. In contrast, people who agree seem to get a pass for silently upvoting.
While I appreciate your saying you don’t intend to ban topics, I think there is considerable risk that this sort of policy becomes a form of de facto censorship. In the same way that we should be wary of Isolated Demands for Rigour, so too we should also be wary of Isolated Demands for Sensitivity.
Take for example the first item on your list—let’s call it A).
Whether it is or has been right or necessary that women have less influence over intellectual debate and less economic and political power
I agree that this is not a great topic for an EA discussion. I haven’t seen any arguments about the cost-effectiveness of a cause area that rely on whether A) is true or false. It seems unlikely that specifically feminist or anti-feminist causes would be the best things to work on, even if you thought A) was clearly true or clearly false. If such a topic was very distracting, I can even see it making sense to essentially ban discussion of it, as LessWrong used to do in practice with regard to Politics.
My concern is that a rule/recommendation against discussing such a topic might in practice be applied very unequally. For example, I think that someone who says
As you know, women have long suffered from discrimination, resulting in a lack of political power, and their contributions being overlooked. This is unjust, and the effects are still felt today.
would not be chastised for doing so, or feel that they had violated the rule/suggestion.
However, my guess is that someone who said
As you know, the degree of discrimination against women has been greatly exaggerated, and in many areas, like conscription or homicide risk, they actually enjoy major advantages over men.
might be criticized for doing so, and might even agree (if only privately) that they had in some sense violated this rule/guideline with regard to topic A).
If this is the case, then this policy is de facto a silencing not of topics, but of opinions, which I think is much harder to justify.
As a list of verboten opinions, this list also has the undesirable attribute of being very partisan. Looking down the list, it seems that in almost every case the discouraged/forbidden opinion is, in contemporary US political parlance, the (more) Right Wing opinion, and the assumed ‘default’ ‘acceptable’ one is the (more) Left Wing opinion. In addition, my impression (though I am less sure here) is that it is also biased against opinions disproportionately held by older people.
And yet these are two groups that are dramatically under-represented in the EA movement! (source) Certainly it seems that, on a numerical basis, conservatives are more under-represented than some of the protected groups mentioned in this article. This sort of list seems likely to make older and more conservative people feel less welcome, not more. Various viewpoints they might object to have been enshrined, while other topics, whose discussion conservatives find distasteful but which is nonetheless not uncommon in the EA community, are not contraindicated.
For a generally well-received article on how to partially address this, you might enjoy Ozy’s piece here.
Here is a recent study on the topic that I think is very relevant:
Gender, Race, and Entrepreneurship: A Randomized Field Experiment on Venture Capitalists and Angels (Gornall and Strebulaev)
We sent out 80,000 pitch emails introducing promising but fictitious start-ups to 28,000 venture capitalists and business angels. Each email was sent by a fictitious entrepreneur with a randomly selected gender (male or female) and race (Asian or White). Female entrepreneurs received an 8% higher rate of interested replies than male entrepreneurs pitching identical projects. Asian entrepreneurs received a 6% higher rate than White entrepreneurs. Our results are not consistent with discrimination against females or Asians at the initial contact stage of the investment process.
Though the study concerns investors rather than EAs, it does seem pretty applicable to EA. The EA community is in many ways similar to the VC community:
Similar geographies: the Bay Area, London, New York etc.
Similar education backgrounds.
Both involve evaluating speculative projects with a lot of uncertainty.
Similarly to the studies discussed above, this finds that people are biased against white men.
(I have some qualms about this type of study, because they involve wasting people’s time without their consent, but this doesn’t affect the conclusions.)
Great post. I’m sure writing this must have been tough, so thanks very much for sharing this.
Great post; I had been thinking about writing something very similar. In many ways I think you have actually understated the potential of the idea. Additionally I think it addresses some of the concerns Owen raised last time.
The final prize evaluations could be quite costly to produce.
I actually think the final evaluations might be cheaper than the status quo. At the moment OpenPhil (or whoever) has to do two things:
1) Judge how good an outcome is.
2) Judge how likely different outcomes are.
With this plan, 2) has been (partially) outsourced to the market, leaving them with just 1).
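As a minimal sketch of that division of labour (the outcomes, payouts, and probabilities below are all hypothetical):

```python
# The funder's remaining job (1): judge how good each realized outcome
# is, ex post. These payout values are purely illustrative.
def funder_evaluation(outcome: str) -> float:
    payouts = {"big success": 100.0, "modest success": 20.0, "failure": 0.0}
    return payouts[outcome]

# The market's job (2): traders' probability estimates are embedded in
# the certificate price, so the funder never has to forecast anything.
trader_probabilities = {"big success": 0.1, "modest success": 0.3, "failure": 0.6}
fair_price = sum(p * funder_evaluation(o) for o, p in trader_probabilities.items())
print(fair_price)  # -> 16.0, roughly what a risk-neutral trader would pay today
```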
If Impact Prizes took off, I could imagine some actors drawing into the ecosystem who only motivated by making profits.
This is not a bug, this is a feature! There is a very large pool of people willing to predict arbitrary outcomes in return for money, that we have thus far only very indirectly been tapping into. In general bringing in more traders improves the efficiency of a market. Even if you add noise traders, their presence improves the incentives for ‘smart money’ to participate. I think it’s unlikely we’d reach the scale required for actual hedge funds to get involved, but I do think it’s plausible we could get a lot of hedge fund guys participating in their spare time.
In terms of legal status, one option I’ve been thinking about would be copying PredictIt. If we have to pay taxes every time a certificate is transferred, the transaction costs will be prohibitive. Unfortunately I am quite worried it will be hard to make this work within US law, which is not very friendly to this sort of experimentation. At the same time, given the SEC’s attitude towards non-compliant security issuance, I would not want to operate outside it!
Other quick thoughts:
One issue with the idea is that it is hard for OpenPhil to add more promised funding later, because the initial investment will already have been committed at some fixed level. E.g. if OpenPhil initially promise $10m, and then later bump it to $20m, projects that have already sold their tokens cannot expand to take advantage of this increase, so it is effectively pure windfall with no incentive effect. A possible solution would be cohorts: we promise $10m, paid in 2022, for projects started in 2019, and then later add another $12m, paid in 2023, for 2020 projects.
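A minimal sketch of how such cohorts might work (all names and amounts are hypothetical):

```python
# Hypothetical cohort structure: each project-start year has its own
# prize pool, so money added later funds future cohorts and keeps its
# incentive effect, rather than being a windfall for past projects.
cohorts = {
    2019: {"pool": 10_000_000, "paid_in": 2022},
    2020: {"pool": 12_000_000, "paid_in": 2023},
}

def add_funding(start_year: int, amount: int) -> None:
    # New commitments may only target cohorts whose certificates
    # have not yet been sold (here, future start years).
    cohort = cohorts.setdefault(start_year, {"pool": 0, "paid_in": start_year + 3})
    cohort["pool"] += amount

add_funding(2021, 8_000_000)  # incentivizes 2021 projects; no pure windfall
```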