Thank you. It’s hard for me (and, I think, for many people) to remember to say what feels obvious.
keller_scholl’s Quick takes
For my part, I’m not sure who disagrees with Owen’s current position, or what that would change going forward. Ritually chanting “You Did Wrong” around him doesn’t seem useful to me. I don’t know what I want him to do differently now. Some of that is that it’s harder to talk about an individual I don’t know than about the policies a team should adopt.
What he did was unacceptable. The existence of repeated incidents of this sort is more concerning.
Right now, I have not been able to discern any plan from the Community Health team more extensive than “Julia screwed up and will try not to do that again.”
I’m not saying that they acted less badly. I have more opinions on what they should do differently going forward. I suspect that that is fairly common.
This is a good post. Thank you for sharing. I disagree somewhat with your framework, because I think it is extremely important to differentiate between factors that increase the likelihood of armed conflict between nuclear powers and factors that increase the risk of nuclear escalation given a conventional conflict. I think that you’ve over-focused on the latter, and that drivers of the former are fairly important.
For example, your analysis of UAVs and UUVs doesn’t consider a risk I find highly salient: mutual misunderstanding of escalatory strength. That is, if the US shoots down an uncrewed Chinese intelligence balloon over US airspace, the escalatory action was China sending the balloon at all. If the US had shot down a crewed Chinese stealth fighter, reactions would have been very different. This holds even if the capabilities of the fighter and the balloon were identical.
Now, if the sole impact of UAVs is that there’s a new step on the escalation ladder, that would probably be slightly beneficial. But if there’s a step on the escalation ladder and Chinese and American political leadership disagree on where that step is, the potential for a situation to turn into a shooting conflict that takes lives increases substantially.
A similar point about escalation uncertainty can be raised about cybersecurity capabilities: militaries across the globe have taken some steps towards defining how they think about cyberattacks. I believe the most explicit statement on the topic comes from the French, but there are also advantages to strategic ambiguity, and genuine uncertainty about how publics in both authoritarian and democratic states would react to a cyberattack that, say, impaired the power grid.
Cybersecurity has an additional problem: in the view of some experts, it creates incentives towards more provocative action, with an advantage for attackers under some circumstances.
As always, I do not speak for my employer or the US government.
Thank you for the response. I discarded my point-by-point response, because I think I have a more elegant explanation: I parse your argument as saying that because there is and should be a high degree of uncertainty around the net harm/benefit of polyamory, we should avoid taking a position on it.
I think that is a fine position to have. I don’t think it’s particularly relevant, because my parse of Keerthana Gopalakrishnan’s perspective is that she thinks polyamory is harmful and there is strong evidence for this.[1] And certainly critics of polyamory can point to a long anthropological tradition and a great number of studies, and advocates can note that those studies are for a wildly different context from modern international elites.
If polyamory is maybe slightly bad, then I think it’s reasonable for EA social consensus, let alone institutionalized EA, to favor letting people make their own choices. We don’t demand that every member eat an optimally healthy diet or practice gratitude journaling, in part because there are substantial differences between people and in part because people get to live their own lives.
If polyamory is very harmful and the evidence for this is very clear, and those harms can’t be pattern-matched exactly to an American gay man in 1970[2], then I would face a much more difficult set of questions. For some people polyamory seems to be intrinsic, and the bar for asking them to suppress that should be very high. I think that EAs should have a social consensus against relationships that we think are very likely to be harmful.
For example, many bright young people think that visa-motivated marriages are an obviously great idea. Having seen that obviously great idea crash and burn multiple times, with relatively few successes and fairly clear causal explanations for the failures, I am now against it. I would advise a friend against it if they asked my opinion, and for a good friend perhaps even if they didn’t. And many EAs have EA friends.
[1] That point is not made explicitly, but it is hard to parse her as having any other stance based on her writing and the tone of the Time piece.
[2] I have recently seen someone try to claim that the harms of polyamory are different because they’re not just social stigma: more people are affected because of multiple partners, and STI risks are higher. Some people clearly don’t know queer history. Many of the “harms” of polyamory that critics raise in this piece seem exactly analogous to straight men being deeply offended at being propositioned by queer men.
I think there are different interpretations that people can take of what it implies. One reading is that the pledge specifies how the pledger responds in the moment, with the person in pain. Another reading would cover how they react through the entire resulting community process. I parsed it as the former, but what you’re describing seems closer to the latter.
Directly funding advocacy against particular relationship styles is something that we take seriously as a possible cause area: the numbers don’t currently seem to check out compared to alternatives, but a strong stance against child marriage seems like a very reasonable position for EA to take.
“Community gatherings” is an incredibly vague category that stretches from “socializing over a meal at an EAG” to “dinner at someone’s house, to which they invited their friends, all of whom are EAs”. I don’t think it’s useful to try to identify events that way, and saying that people can’t have the latter because those events are not for helping others effectively is clearly too far. Personally, I think EAs are pretty good about not branding informal social events as EA Events™, but that distinction in branding doesn’t necessarily mean much to anyone.
The fact that there exists an optimal population size for improving the future does not solve population ethics, because population ethics influences what “improving the future” means.
If, say, you are an average utilitarian, then a very small population, experiencing an extremely high standard of living and in no danger of losing it, is a good outcome. A total utilitarian may disagree, and think that there should be much more emphasis on expanding and creating/ensuring more good lives. The optimal population size today and next year could easily shift depending on which future you’re aiming for.
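To make the contrast concrete (a quick sketch, writing $u_i$ for the welfare of person $i$ and $N$ for the population size):

$$U_{\text{avg}} = \frac{1}{N}\sum_{i=1}^{N} u_i, \qquad U_{\text{tot}} = \sum_{i=1}^{N} u_i$$

If everyone enjoys the same high welfare $u > 0$, then $U_{\text{avg}} = u$ regardless of how small $N$ is, while $U_{\text{tot}} = Nu$ grows with $N$, so the two views can favor very different optimal population sizes.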
So you haven’t solved population ethics for the indefinite future (which still matters), and that unresolved question influences choices today (where most philosophers would agree it’s less relevant). This is not a solution, and I hope I’ve explained why.
He presented as a committed EA (without judging whether or not that presentation was a lie), he was and is prominent, and excluding him would be scrubbing history.
Edit: there are many reasonable frameworks for inclusion, but if we’re including philosophers I’ve never heard of, we should include the five most famous EAs (and SBF is undoubtedly in that list).
Trying to work through what the unique needs of EAs would be:
- Tax planning while anticipating large charitable donations
- Maximum-growth portfolios for the relatively risk-tolerant
- Investing to have more resources in worlds where EAs think resources are more useful, or that EAs think are more likely than the market does
I think many people are interested in financial planning, out of a mix of frugality and personal interest. But it isn’t clear to me that personalized financial advice is the way to address these unique needs, as opposed to a 1:many medium such as YouTube or blog posts, and I am generally skeptical of autarky as a policy goal.
Could you elaborate on what you see as the advantages of this approach?
Firstly, I want to flag that this prediction is in strong disagreement with market predictions: the rate on a 20-year treasury is 3.85% as I write this, suggesting that investors do not expect a dramatic increase in inflation. This is in one of the largest, most liquid, and most attended-to markets on the planet, the only competition I am aware of being other US Government bonds.
Secondly, the weighted average maturity of US government debt is around five years, which gives a concrete value for thinking about how long the US government could sustain much higher inflation before markets fully react. That’s a moderate amount of time, but even if the US government were willing to accept multiple years of 15% inflation (an extremely bold claim), it would still only get a temporary ~50% reduction in the real debt without fixing the underlying entitlement issues.
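As a rough check on that arithmetic (a minimal sketch; the 15% rate and five-year maturity are the assumptions above, not forecasts):

```python
# Back-of-the-envelope: how much sustained inflation erodes the real value
# of existing debt before bonds mature and markets can reprice.
inflation = 0.15  # assumed annual inflation rate
years = 5         # approximate weighted average maturity of US debt

real_value = 1 / (1 + inflation) ** years
print(f"Real value of existing debt after {years} years: {real_value:.0%}")
# -> about 50%, i.e. at best a temporary ~50% reduction in real terms
```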
Which is why it is very strange that this post assumes, as a hard constraint, that the US government will fulfill its entitlement obligations. I’m not sure why that is assumed. Faced with the option set “inflation” and “cut Medicare and Social Security”, the government might easily choose to cut Medicare and Social Security. Yes, there have been promises, but they are not very credible. Maybe the inflation target gets set to 3% or 4%, numbers that are still very small, but cuts to the commitments seem at least as plausible as spending expands.
Once you drop that assumed constraint, the option set of the government expands to a wide variety of more acceptable solutions.
Finally, “Inflation is going to be terrifyingly high any day now: buy gold/crypto/my special security” has been a recurrent promise of financial snake-oil salesmen for decades. Always be careful when you see people claiming it, particularly if they’re also selling something. Debt fears have a similar pedigree: we might be told to be terrified of 130% debt-to-GDP now, but I remember when the threshold was 90%, which turned out to rest on an Excel error.
They might be right this time, but you should look for a lot more than a single analysis without theoretical justification, which relies heavily on datapoints following legendarily expensive wars. In the period since the 1950s, attitudes towards government defaults have shifted. Monarchies act differently from independent central banks.
I think a practical intervention here would be outlining how much governance should be in place at a variety of different scales. “We employ 200 people directly and direct hundreds of millions of dollars annually” should obviously come with much more governance structure than two people self-funding a project. A claim like “by the time your group has ten members and expects to grow, one of them, who is not in a leadership role themselves, should be a designated contact person for concerns, and a second, replacement person as socially and professionally distant from the first as practical should be designated by the time your group hits 30 people” is the kind of concrete guidance I have in mind. I expect explicit growth models of governance to be much more useful than broad prescriptions for decision-makers, and to make explicit the actual disagreements that people have.
Thank you for responding. I read “Some of these men control funding for projects and enjoy high status in EA communities and that means there are real downsides to refusing their sexual advances and pressure to say yes, especially if your career is in an EA cause area or is funded by them. There are also upsides, as reported by CoinDesk on Caroline Ellison.” I have seen a number of people pass around https://www.coindesk.com/business/2022/11/10/bankman-frieds-cabal-of-roommates-in-the-bahamas-ran-his-crypto-empire-and-dated-other-employees-have-lots-of-questions/. I have seen a number of assertions that Caroline received the job because of a sexual/romantic relationship with SBF. I haven’t seen anyone assert any other “upsides” that make sense in specific relation to Caroline Ellison. Would you mind clarifying what upsides you were referring to if not the CEO position?
[2022-11-13: Edit to include more of the context of the quote]
I think it’s bad to confidently assert, without real evidence, that a woman slept her way to the top of a company. Do you think it’s fine?
The casual assumption people make that the only reason Caroline could have become CEO was that she was sleeping with SBF is annoying when I see it on Twitter or some toxic subreddit. Here I expect better. Plenty of people at FTX and Alameda were equally young and equally inexperienced. FTX’s CTO, Gary Wang (a similarly important role at a tech company), was 29. Sam Trabucco, the previous Alameda co-CEO, seems to be about the same age. I have seen no reason to think that Caroline was particularly unusual in her age or experience relative to others at FTX and Alameda.
or it’s funny to write like that if you feel like it. charles raises a fair point that social reactions to a post are far in the future, but they can be worth much more than the time you invested. that probably makes more sense for posts than comments though
Agreed on the importance of who their potential donor pool is. If I found out that an org had run the event the author describes for highly committed EAs I would be aghast. But by the standards of what is done to solicit ultra high net worth donors who move millions annually and who are not currently interested in EA, it seems entirely reasonable.
I think that most of this is good analysis: I am not convinced by all of it, but it is universally well-grounded and useful. However, the point about Communicating Risk, in my view, misunderstands the point of the original post, and the spirit in which the discussion was happening at the time. It was not framed with the goal of “what should we, a group that includes a handful of policymakers among a much larger membership, be aiming to convince people with”. Rather, I saw it as a personally relevant tool that I used to validate advice to friends and loved ones about when they should personally get out of town.
Evaluating the cost in effective hours of life made a comparison they and I could work with: how many hours of my life would I pay to avoid relocating for a month and paying for an Airbnb? I recognize that it’s unusual to discuss GCRs this way, and I would never do it if I were writing in a RAND publication (I would use the preferred technostrategic language), but it was appropriate and useful in this context.
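The arithmetic behind that comparison is simple; a minimal sketch, with every number hypothetical:

```python
# Illustrative only: converting a small mortality risk into expected
# hours of life, to weigh against the hassle of relocating for a month.
p_death = 1e-4          # hypothetical personal risk over the period
remaining_years = 50    # hypothetical remaining life expectancy
hours_per_year = 365 * 24

expected_hours = p_death * remaining_years * hours_per_year
print(f"Expected hours of life at stake: {expected_hours:.0f}")
# ~44 hours: relocating is worth it only if a month away costs less than
# that in life-equivalent value, under these made-up numbers.
```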
Two points, but I want to start with praise. You noticed something important and provided a very useful writeup. I agree that this is an important issue to take seriously.
> While aiming to be the person available when policymakers want expert opinion does not favour more technocratic decision-making, actively seeking to influence policymakers does favour more technocratic decision-making
I don’t think that this is an accurate representation of how policymakers operate, either for elected officials or bureaucrats. My view comes from a gestalt of years of talking with congressional aides, bureaucrats in and around DC, and working at a think tank that does policy research. Simply put, there are so many people trying to make their point in any rich democracy that being “available” is largely equivalent to being ignored.
There are exceptions, particularly academics who publish extensively on a topic and gain publicity for it, but most experts who don’t actively attempt to participate in governance simply won’t influence it. Nobody has enough spare time, and nobody has enough spare energy, to reliably seek out points of view and ideas.
More importantly, I think that marginal expert influence mostly crowds out other expert influence, and does not crowd out populist impulses. Here I am more speculative, but my sense is that elected officials get a sense of what the expert/academic view is, as one input in a decision-making process that also includes stakeholders, public opinion (based on polling, voting, and focus groups), and party attitudes (activists, other elected officials, aligned media, etc.). Hence an EA org that attempts to change views mostly displaces others occupying a similar social/epistemic/political role, not the influence of public opinion.
On the bureaucracy side, expert input, lawmaker input, and stakeholder input are typically the primary influences when considering policy change. Occasionally public pressure will latch onto something, but the Federal Register is very boring, and as the punctuated equilibrium model of politics suggests, most of the time the public isn’t paying attention. Bureaucrats usually don’t have the extra time and energy to seek out people whose work might be relevant but who aren’t actively presenting it. Add that most exciting claims are false, so decisionmakers would have to read through entire literatures to be confident in a claim, and influence ceded by experts goes primarily not to populist impulses but to existing stakeholders.
Bad Things Are Bad: A Short List of Common Views Among EAs
No, we should not sterilize people against their will.
No, we should not murder AI researchers. Murder is generally bad. Martyrs are generally effective. Executing complicated plans is generally more difficult than you think, particularly if failure means getting arrested and massive amounts of bad publicity.
Sex and power are very complicated. If you have a power relationship, consider whether you should also have a sexual one. Consider very carefully whether you have a power relationship: many forms of power relationship are invisible, or at least transparent, to the person with power. Common forms of power include age, money, social connections, professional connections, and almost anything that correlates with money (race, gender, etc.). Some of these will be more important than others. If you’re concerned about something, talk to a friend who’s on the other side of that divide from you. If you don’t have any, maybe just don’t.
And yes, also, don’t assault people.
Sometimes deregulation is harmful. “More capitalism” is not the solution to every problem.
Very few people working on wild animal suffering think that we should go out and deliberately destroy the biosphere today.
Racism continues to be an incredibly negative force in the world. Anti-black racism seems pretty clearly the most harmful form of racism for the minority of the world that lives outside Asia.[1]
Much of the world is inadequate and in need of fixing. That EAs have not prioritized something does not mean that it is fine: it means we’re busy.
The enumeration in the list, of certain bad things, being construed to deny or disparage other things also being bad, would be bad.
Hope that clears everything up. I expect with 90% confidence that over 90% of EAs would agree with every item on this list.
[1] Within Asia, I don’t know enough to say with confidence. Could be caste discrimination, could be ongoing oppression of non-Han people, could be something I’m not thinking of. I’m not making a claim about the globe as a whole because I haven’t run the numbers, and different EAs will have different values and approaches to weighting history, cultures, etc. I just refuse to fall into the standard America/Euro-centric framework.