Thank you for this! I’m not sure if this was intentional or not, but it seems worth noting that my work with Robin was funded under a grant from OpenPhil, including my salary as a research assistant and some course buyouts for him.
keller_scholl
I found myself unconvinced by a number of your factual points, though I agree with your overall conclusion for very different reasons. I’ve included three that I think are particularly key.
1.
“Traders who don’t account for their lack of understanding of things like poverty-related policies (by, say, polling poor people on their policy preferences), will lose money to traders who do.
I think this solves part of the problem, but the problem will remain as long as the futarchy markets are not perfect, and as long as bettors whose wealth is mostly independent of the futarchy markets are influential for futarchy.”
The right comparison here seems to be the stock market: obviously markets are imperfect, and some people whose wealth is mostly independent of the stock market are influential. But the overall result is that, once in a long while, you get an extraordinary event like the recent GameStop/AMC/etc. rise, representing a tiny fraction of the total stock market. This source suggests that they were, at most, on the order of 2.6% of the specific markets that include them, and that was for a highly unusual event. Given that your objection is presented without numbers, I do not think it is at all a stretch to call this mostly a non-problem.
More broadly, you argue that it is bad for the rich to have more influence than the poor over policy relevant to addressing poverty, but surely that equally implies it is good for the rich to have more influence over policy related to wealth? While I agree that’s a little extreme and positionality is not symmetric, I generally expect wealthier individuals to be better educated, to have more spare time to devote to politics, and to be more cosmopolitan. While I am sympathetic to the specific case you bring up, omitting this consideration seems like a weakness.
“Many people care about policy decisions, so I don’t think we can expect that bettors whose wealth is mostly independent of the futarchy markets (i.e. the futarchy is not their chief source of income) will have no or little influence. So while wealth may end up slightly correlated with policy assessment skills, I don’t think we can expect that correlation to be strong.”
You argue that the correlation is negative! That is the crux of your point!
2.
“Hanson does not account for the possibility that the wealth landscape could change drastically in the next 10 years (in the near future, there could conceivably be individuals who are orders of magnitude richer than anyone is today).”
I don’t...think that’s particularly plausible? Elon Musk is currently “worth” about 200 billion dollars (standard caveats about why that’s an overestimate aside), and even a single order of magnitude more would be two trillion dollars; “orders of magnitude”, plural, implies twenty trillion or more. You don’t cite any source to defend the likelihood of this claim, so I am not sure how to disagree with it.
3.
“It seems possible that futarchy might make us more efficiently pursue whatever metrics most people today genuinely think are good, but which ignore many or almost all moral patients that currently exist or that will exist in future.”
Lots of things are possible: is there any reason to expect this problem to be worse under futarchy than under democracy?
Should we have received a confirmation that our application was successfully received?
I came here to say this: you have an unusual work position relative to most EAs, and are likely to be unusually good at identifying opportunities in countries Wave is located in.
The paper doesn’t explicitly mention economic growth, but it does discuss technological progress, and at points seems to argue or insinuate against it.
“For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential.” Personally, I consider a long-term future with a 48.6% child and infant mortality rate abhorrent and opposed to human potential, but the authors don’t seem bothered by this. They have little space to explain how their implied society would handle the issue, though, so I will not critique it excessively.
There is also a repeated implication that halting technological progress is, at a minimum, possible and possibly desirable.
“Since halting the technological juggernaut is considered impossible, an approach of differential technological development is advocated”
“The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible”
“regressing, relinquishing, or stopping the development of many technologies is often disregarded as a feasible option” implies to me that one of those three options is a feasible option, or is at least worth investigating.
While they don’t explicitly advocate degrowth, I think it is reasonable to read them as doing so, as John does.
Suggesting that a future without industrialization is morally tolerable does not imply opposition to “any and all” technological progress, but the space left between those positions is very small. I don’t think they’re taking a position on the value of better fishhooks.
Two points, but I want to start with praise. You noticed something important and provided a very useful writeup. I agree that this is an important issue to take seriously.
“While aiming to be the person available when policymakers want expert opinion does not favour more technocratic decision-making, actively seeking to influence policymakers does favour more technocratic decision-making”
I don’t think that this is an accurate representation of how policymakers operate, either for elected officials or bureaucrats. My view comes from a gestalt of years of talking with congressional aides, bureaucrats in and around DC, and working at a think tank that does policy research. Simply put, there are so many people trying to make their point in any rich democracy that being “available” is largely equivalent to being ignored.
There are exceptions, particularly academics who publish extensively on a topic and gain publicity for it, but most people who don’t actively attempt to participate in governance simply won’t. Nobody has enough spare time, and nobody has enough spare energy, to actively seek out points of view and ideas reliably.
More importantly, I think that marginal expert influence mostly crowds out other expert influence, and does not crowd out populist impulses. Here I am more speculative, but my sense is that elected officials get a sense of what the expert/academic view is, as one input in a decision-making process that also includes stakeholders, public opinion (based on polling, voting, and focus groups), and party attitudes (activists, other elected officials, aligned media, etc.). Hence an EA org that attempts to change views mostly displaces others occupying a similar social/epistemic/political role, not public opinion.
On the bureaucracy side, expert input, lawmaker input, and stakeholder input are typically the primary influences when considering policy change. Occasionally public pressure will notice something, but the Federal Register is very boring, and as the punctuated equilibrium model of politics suggests, most of the time the public isn’t paying attention. Bureaucrats usually don’t have the extra time and energy to seek out people whose work might be relevant but who aren’t actively presenting it. Add that most exciting claims are false, so decisionmakers would have to read through entire literatures to be confident in a claim, and the influence that experts cede goes primarily not to populist impulses but to existing stakeholders.
I think that most of this is good analysis: I am not convinced by all of it, but it is universally well-grounded and useful. However, the point about Communicating Risk, in my view, misunderstands the point of the original post and the spirit in which the discussion was happening at the time. It was not framed with the goal of “what should we, a group that includes a handful of policymakers among a large number, aim to persuade people with?” Rather, I saw it as a personally relevant tool that I used to validate advice to friends and loved ones about when they should personally get out of town.
Evaluating the cost in effective hours of life made a comparison they and I could work with: how many hours of my life would I pay to avoid relocating for a month and paying for an AirBnB? I recognize that it’s unusual to discuss GCRs this way, and I would never do it if I were writing in a RAND publication (I would use the preferred technostrategic language), but it was appropriate and useful in this context.
or it’s funny to write like that if you feel like it. charles raises a fair point that social reactions to a post arrive far in the future, but they can be worth much more than the time you invested. that probably makes more sense for posts than comments though
The casual assumption that people make that obviously the only reason Caroline could have become CEO was because she was sleeping with SBF is annoying when I see it on Twitter or some toxic subreddit. Here I expect better. Plenty of people at FTX and Alameda were equally young and equally inexperienced. The CTO (a similarly important role at a tech company) of FTX, Gary Wang, was 29. Sam Trabucco, the previous Alameda co-CEO, seems to be about the same. I have seen no reason to think that Caroline was particularly unusual in her age or experience relative to others at FTX and Alameda.
I think it’s bad to confidently assert, without real evidence, that a woman slept her way to the top of a company. Do you think it’s fine?
Thank you for responding. I read “Some of these men control funding for projects and enjoy high status in EA communities and that means there are real downsides to refusing their sexual advances and pressure to say yes, especially if your career is in an EA cause area or is funded by them. There are also upsides, as reported by CoinDesk on Caroline Ellison.” I have seen a number of people pass around https://www.coindesk.com/business/2022/11/10/bankman-frieds-cabal-of-roommates-in-the-bahamas-ran-his-crypto-empire-and-dated-other-employees-have-lots-of-questions/. I have seen a number of assertions that Caroline received the job because of a sexual/romantic relationship with SBF. I haven’t seen anyone assert any other “upsides” that make sense in specific relation to Caroline Ellison. Would you mind clarifying what upsides you were referring to if not the CEO position?
[2022-11-13: Edit to include more of the context of the quote]
I think a practical intervention here would be outlining how much governance should be in place at a variety of different scales. “We employ 200 people directly and direct hundreds of millions of dollars annually” should obviously have much more governance structure than two people self-funding a project. Consider a claim like “by the time your group has ten members and expects to grow, one of them, who is not in a leadership role themselves, should be a designated contact person for concerns, and a second, replacement person, as socially and professionally distant from the first as practical, should be designated by the time your group hits 30 people.” I expect explicit growth models of governance to be much more useful than broad prescriptions for decision-makers, and to make explicit the actual disagreements that people have.
Firstly, I want to flag that this prediction is in strong disagreement with market predictions: the rate on a 20-year treasury is 3.85% as I write this, suggesting that investors do not expect a dramatic increase in inflation. This is in one of the largest, most liquid, and most attended-to markets on the planet, the only competition I am aware of being other US Government bonds.
Secondly, the weighted average maturity of US government debt is around five years, to give a concrete value for thinking about how long the US government can have much higher inflation before markets are able to fully react. That’s a moderate amount of time, but if you say that the US government is willing to accept multiple years of 15% inflation (an extremely bold claim), you could still only get a temporary 50% reduction in the debt without fixing the underlying entitlement issues.
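The erosion arithmetic above can be sketched quickly; this is an illustrative back-of-envelope calculation using the rates stated in my comment, not a forecast:

```python
# Illustrative sketch: at a constant inflation rate, the real value of fixed
# nominal debt decays geometrically. This computes how many years of 15%
# inflation it takes to roughly halve it.

def real_value(years: int, inflation: float) -> float:
    """Real value of $1 of fixed nominal debt after `years` of `inflation`."""
    return 1.0 / (1.0 + inflation) ** years

for years in range(1, 7):
    print(years, round(real_value(years, 0.15), 3))
# It takes roughly 5 years of 15% inflation to halve the real debt burden,
# and that ignores the ~5-year weighted average maturity: as debt rolls over,
# new bonds price in the inflation, so the realized haircut is smaller still.
```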
Which is why it is very strange that this post assumes, as a hard constraint, that the US government will fulfill its entitlement obligations. I’m not sure why that is assumed. Faced with the option set “inflation” and “cut Medicare and Social Security”, the government might easily choose to cut Medicare and Social Security. Yes, there have been promises, but they are not very credible. Maybe the inflation target gets set to 3% or 4%, numbers that are still very small, but cuts to those commitments seem at least as plausible as inflation as spending expands.
Once you drop that assumed constraint, the option set of the government expands to a wide variety of more acceptable solutions.
Finally, “Inflation is going to be terrifyingly high any day now: buy gold/crypto/my special security” has been a recurrent promise of financial snake oil salesmen for decades. Always be careful when you see people claiming it, particularly if they’re also selling something. Debt fears have a similar pedigree: we might be told to be terrified of 130% now, but I remember back when it was 90%, which turned out to be an Excel error.
They might be right this time, but you should look for a lot more than a single analysis without theoretical justification, which relies heavily on datapoints following legendarily expensive wars. In the period since the 1950s, attitudes towards government defaults have shifted. Monarchies act differently from independent central banks.
Trying to work through what the unique needs of EAs would be:
Tax planning while anticipating large charitable donations
Maximum growth portfolios for the relatively risk-tolerant
How to invest to have more resources in worlds that EAs either think resources are more useful in, or think are more likely than the market does.
I think many people are interested in financial planning, out of a mix of frugality and personal interest. But it isn’t clear to me that personalized financial advice is the way to address these unique needs, as opposed to a 1:many medium such as Youtube or blog posts, and I am generally skeptical of autarchy as a policy goal.
Could you elaborate on what you see as the advantages of this approach?
He presented as a committed EA (without judging whether or not that presentation was honest), he was and is prominent, and excluding him would be scrubbing history.
Edit: there are many reasonable frameworks for inclusion, but if we’re including philosophers I’ve never heard of, we should include the five most famous EAs (and SBF is undoubtedly in that list).
The fact that there exists an optimal population size for improving the future does not solve population ethics, because population ethics influences what “improving the future” means.
If, say, you are an average utilitarian, then a very small population, experiencing an extremely high standard of living and in no danger of losing it, is a good outcome. A total utilitarian may disagree, and think that there should be much more emphasis on expanding and creating/ensuring more good lives. The optimal population size today and next year could easily shift depending on which future you’re aiming for.
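As a toy illustration of the disagreement above (my numbers, purely for illustration): the two theories can rank the same pair of futures in opposite orders, so "optimal population size" depends on which theory you hold.

```python
# Toy example: two candidate futures, as lists of per-person welfare levels.
small_rich = [9.0] * 1_000    # small population, extremely high welfare
large_ok   = [5.0] * 10_000   # much larger population, lower welfare

def average(welfare):
    """Average utilitarian score: mean welfare."""
    return sum(welfare) / len(welfare)

def total(welfare):
    """Total utilitarian score: summed welfare."""
    return sum(welfare)

# The rankings disagree: the averagist prefers the small rich world,
# while the totalist prefers the larger one.
print(average(small_rich) > average(large_ok))  # True
print(total(large_ok) > total(small_rich))      # True
```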
So population ethics remains unsolved for the indefinite future (where it still matters), and that influences decisions today (where most philosophers would agree it’s less relevant). This is not a solution, and I hope I’ve explained why.
Directly funding advocacy against particular relationship styles is something that we take seriously as a possible cause area: the numbers don’t currently seem to check out compared to alternatives, but a strong stance against child marriage seems like a very reasonable position for EA to take.
“community gatherings” is an incredibly vague category that stretches from “socializing over a meal at an EAG” to “dinner at someone’s house that they invited their friends, all of whom are EAs, to”. I don’t think it’s useful to try to identify events that way, and saying that people can’t have the latter because those events are not for helping others effectively is clearly too far. Personally, I think EAs are pretty good about not branding informal social events as EA Events TM, but that distinction in branding doesn’t necessarily mean that much to anyone.
Using the links you provide, 50% of cash incentives comes from Strategic Performance Goals in three categories (product & strategy; customers & stakeholders; culture & organizational leadership), and within one of those categories, diversity and inclusion (D&I) is one of three parts listed, so at a rough guess 5% of annual cash incentives is tied to D&I. Cash incentives at Microsoft for the executives analyzed are about a fifth of total compensation, so about 1% of executive compensation is tied to D&I.
I think that having a headline of “base 50% of executive compensation”, when the actual fraction seems to be about 1%, is actively deceptive, and I think this question should be rewritten.
I would hope that, if EA orgs gave bonuses to leadership for success in diversity and inclusion, it would be more than 1% of total pay.
At Intel, about 7% of total compensation (50% of the cash incentive is “operational performance”, and the cash incentive is about a seventh of total pay for the CEO) is adjusted by D&I, but the size of that adjustment is not made clear. Given that the operational performance goals include many other targets, I would be surprised if Intel was substantially different from Microsoft here.
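The arithmetic in the two estimates above can be checked in a few lines; this uses the fractions stated in my comment, not the underlying filings:

```python
# Back-of-envelope check of the compensation figures above.

# Microsoft: D&I is one of three parts of one of three Strategic Performance
# Goal categories, which together determine 50% of the cash incentive.
msft_dni_share_of_cash = 0.50 * (1 / 3) * (1 / 3)   # ~5.6% of cash incentive
msft_cash_share_of_total = 1 / 5                    # cash ~ a fifth of total comp
msft_dni_share_of_total = msft_dni_share_of_cash * msft_cash_share_of_total
print(round(msft_dni_share_of_total * 100, 1))      # ~1.1% of total compensation

# Intel: "operational performance" is 50% of the cash incentive, and cash is
# about a seventh of the CEO's total pay. D&I is only one adjustment within
# that bucket, so the true D&I-linked share is below this upper bound.
intel_upper_bound = 0.50 * (1 / 7)
print(round(intel_upper_bound * 100, 1))            # ~7.1% of total compensation
```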