Director of Research at PAISRI
I guess I don’t understand why w > x > y > z implies w − y = x − z iff w − x = y − z. Sorry if this is a standard result I’ve forgotten, but at first glance it’s not totally obvious to me.
I didn’t quite follow. What’s the reasoning for claiming this?
From the definition of the four variables, the following equivalence can be deduced:
w − y = x − z ⟺ w − x = y − z
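For what it’s worth, the quoted equivalence is a pure algebraic identity that holds for any real numbers, so the ordering w > x > y > z isn’t actually needed. A short derivation:

```latex
\begin{align*}
w - y = x - z
&\iff (w - y) - (x - z) = 0 \\
&\iff (w - x) - (y - z) = 0 && \text{(regrouping the same four terms)} \\
&\iff w - x = y - z
\end{align*}
```

Each step just moves terms across the equality, so both directions of the biconditional follow immediately.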
Well, I’d say we’re all pragmatists whether we acknowledge it or not due to the problem of the criterion.
Not exactly based on EA org experience, but I think one of the biggest challenges orgs face is going from small enough that everyone can sit at the same table (people sometimes call these two-pizza teams, because you can feed everyone with two pizzas; in practice that’s somewhere between 8 and 12 people) to medium (fewer than roughly 150 people, i.e. still below the point past which you can no longer personally know of everyone) to large.
EA orgs are most likely to face the first transition, small to medium. The big thing to know is that you’ll have to find ways to take what happened and worked organically with a small team and transition it into processes since you’ll no longer be able to easily achieve org-wide alignment automatically. This typically means the introduction of planning processes that make sure everyone is on the same page about what you’re trying to do and making sure everyone’s work is connected to the org’s mission.
Exactly what will work is context dependent, but perhaps the first thing to get right is that you can’t expect things to keep working the way they did when you were smaller. There is a tipping point where adding one more person breaks your ability to function the way you used to, and at that point you need to be ready to introduce more formal processes (these don’t necessarily have to be extremely formal, just more formal than everyone sitting around the table talking and figuring things out without any process).
Dislike the idea. Feels like this will change the character of the site in a way that’s negative. It’s a bit hard to say why, but part of the vibe of this place is that it’s about ideas, not about people, and this would take it away from that direction; I think having more of an ideas vibe than a personal-brand vibe is good for what this forum is for. There are plenty of other places where people can have a more personally identifiable or warmer experience of connecting with others.
If we did this I feel like it would be trying to optimize for something that’s not, in my view, the primary purpose of the forum, and thus would make this site worse at being the EA Forum than without this feature.
I’ve been asking for this feature on LW. If we’re not going to get it there, at least we can get it here!
Given the inclusion of space opera like Dune, I recommend including Vinge’s work like A Fire Upon the Deep and A Deepness In the Sky. These deal with the long term consequences of intelligence explosion, albeit one in a world with slightly different physics than ours (or so it seems given our limited information; Vinge is careful to construct it in a way such that I think we can’t be certain today our universe is not like the one he depicts in the books).
I’d also include Niven’s Ringworld. Not obvious this is longtermist at first, but deep into the book that changes (not much more I can say without spoilers if you’re hoping to read it).
So I remain unconvinced that there’s a specific longtermist case for democracy, but I think there is a longtermist case for some kind of context in which longtermist work can happen.
What I have in mind is that I’m not sure democracy or liberal democracy is necessary for work on longtermist cause areas, but liberal democracy is clearly creating an environment in which this work can get done. So there’s an interesting question, then: what are the features of liberal democracy that enable longtermist work?
I ask this because I’m not sure that, for example, democracy is necessary in order to work on improving the longterm future. However, it’s also clear that something about liberal democracy has allowed people to start doing work toward bettering the longterm future, so it must have some features we care about for that purpose. Maybe electing the government is the key feature that matters, but I don’t see an obvious causal chain between the two, which makes me wonder which features do matter, and which we’d want to ensure are preserved if we want people to be able to work on making the longterm future better, even under a government we wouldn’t consider democratic.
Maybe another way to put my comment is that this post feels like it takes for granted that liberal democracy is good for longtermism, and so sets out to figure out what it is about liberal democracy that makes it good. I’d say it slightly differently: longtermism has been fostered within liberal democracies, so there must be something about liberal democracies that matters, but this doesn’t imply that longtermism requires liberal democracy. We should cast a wider net and look at the features of the specific liberal democracies where longtermist work is flourishing, without presupposing that it’s somehow connected to the system of government. For example, maybe it’s just that liberal democracies are rich and have lots of extra money to spend on “hobby” interests like longtermism, and any sufficiently rich society, no matter the government, would be able to foster it. I don’t know, but that’s the kind of question that seems to me worth exploring.
As best I can tell you don’t seem to address the main reasons most organizations don’t choose to outsource:
additional communication and planning friction
You could of course hand-wave here and say that since you propose an EA-oriented agency to serve EA orgs this would be less of an issue, but I’m skeptical: if such a model worked, I’d expect, for example, never to have had a job at a startup, and instead to have worked for a large firm that specialized in providing tech services to startups. Given that there’s a lot of money at stake in startups, it’s worth considering whether these sorts of challenges will cause your plan to remain unappealing in reality, since, to continue the example, most startups that succeed have in-house tech, not outsourced tech.
I think the obvious challenge here is how to be more inclusive in the ways you suggest without destroying the thing that makes EA valuable. The trouble as I see it is that you only have 4-5 words to explain an idea to most people, and I’m not sure you can cram the level of nuance you’re advocating for into that for EA.
This question on the EA Facebook group got some especially non-EA answers. This seems not great given that many people possibly first interact with EA via Facebook. I tend to ignore this group, and maybe others do the same, but if this post is representative then we probably need to put more effort in there to make sure comments are moderated or replied to, so it’s at least clear who is speaking from an EA perspective and who isn’t.
You want more good and less bad in the world? Would it be better if we had a little more good and a little less bad? If so, then we should care about the efficiency of our efforts to make the world better.
*note that I of course here mean something like efficiency that includes Pareto efficiency, not the narrow notion of efficiency we use everyday; you could also say “effective” but you asked for why giving should be effective, and we can ground effectiveness in Pareto efficiency across all dimensions we care about
I’ve been pretty skeptical that mental health is something EAs should focus on. One thing I see lacking in this report (apologies if it’s there and I didn’t find it) is a way of comparing it to alternatives: I don’t think that mental health being a source of suffering for people is in question, but rather whether it compares favorably to other issues.
For example I’d love something like QALY analysis on mental health that would allow us to compare it to other cause areas more directly.
Having lived with someone who suffered chronic kidney stones, at least within the US, a huge problem in recent years has been the over-reaction to the so-called opioid crisis. The result has been a decreased willingness to actually treat what we might call chronic acute pain, like the kind that comes from kidney stones.
This is a somewhat technical distinction I’m making here. Kidney stone pain is acute in that it has a clear cause that can be remediated. However if someone produces kidney stones chronically (let’s say at least one a month), they are chronically in acute pain. This creates a problem because standard treatment protocols for chronic pain don’t always work because this is a continuous level of pain above what’s normally experienced by chronic pain sufferers, perhaps with the exception of migraines. But since migraine pain is best treated with non-opioid drugs, they don’t run into the same problems as chronic kidney stone sufferers do who need repeated access to opioids to deal with pain that can break through maintenance pain medications.
The result is people left in agony who suffer from chronic kidney stones that are resistant to treatment because of restrictions on opioid drug use in the name of curbing abuse. To make matters worse, treatment can become a catch-22: chronic pain doctors won’t treat such pain because it’s “acute” and at some point other doctors will stop wanting to treat repeated kidney stones because they are “chronic”. The incentives are aligned perfectly to get doctors to not treat these patients since they can risk losing their license for improperly prescribing opioids. It doesn’t matter if it’s valid, all that matters is that it looks suspicious in a database, and doctors would rather avoid that attention than risk it to treat patients (but of course not all doctors are like this, just that there’s a lot of them who follow the incentives rather than work against them in the name of patient care).
Regarding the difference in prevalence between chronic pain in men and women, there’s a tendency, at least within the US medical system, to dismiss women’s pain more often than men’s. A good example of this is pain resulting from endometriosis, which is often dismissed or downplayed by doctors as “just bad period cramps” rather than a serious source of chronic pain. So too for many other sources of pain unique to women.
I don’t have a source, but my experience is that most of this seems to be due to a variant of the typical mind fallacy: male doctors and some female doctors have never experienced similar pain and so fail to appreciate its severity and sympathize with it less on the margin, being more likely to recommend more conservative treatment rather than more aggressively try to remediate the pain.
My model is that the global angle is kind of boring: the drug war was pushed by the US, and I expect that if the US ends it then other nations will either follow its example or at least drift in random directions, with the US no longer imposing the drug war on them by threat of trade penalties.
I think this starts to get at questions of tractability, i.e. how neglected this is contingent on tractability (and vice versa). In my mind this is one of the big challenges of any kind of policy work where there’s already a decent number of folks in the space: you have to have reasonably high confidence that you can do better than everyone else is doing now (and not just that you have an idea for how to do better, but that you can actually succeed in executing better) in order for it to cross the bar of a sufficiently effective intervention (in expectation) to be worth working on.
I would expect this not to be very neglected, hence I would expect EAs to be able to have much impact here only if, for example, it’s effectively neglected because the existing people pushing for an end to the drug war are unusually ineffective.
For example, there’s already NORML, which has been working on the cannabis angle of this since the 1970s with decent success; Portugal has already ended the drug war locally; and Oregon recently decriminalized possession of drugs for personal use.
Getting involved feels a bit like getting involved in, say, marriage equality in the 2000s: the change was already clearly in motion, plenty of people were working to push for it, and so there’s not clearly a lot additional that EAs could have brought to the table.
On the one hand I’m in favor of more housing. I live in the SF Bay Area where this is also a problem, and really insufficient housing is a problem for all of California, so I’m naturally supportive of efforts to address this problem. However, I’m not sure this project is a high priority for EAs.
This seems like something that’s not especially neglected (lots of people are thinking about ways to improve the housing situation in American cities) and also unlikely to have high impact in relative terms (viz. globally rich Americans are not suffering as much due to expensive, limited housing in desirable cities as the global poor, animals, or far future beings (in expectation)). Cf. ITN framework for why I’m thinking about these criteria.
I think it would be hard to convince me this is working on something neglected, but I’m pretty open to the idea that I might be wrong about impact, especially if better housing in American cities is somehow on a critical path to other, more obviously higher impact projects. I’d be interested if there are better arguments for why this is impactful enough to be prioritized over other, more obviously high impact causes.
One, I’d argue that hits-based giving is a natural consequence of working through what using “high-quality evidence and careful reasoning to work out how to help others as much as possible” really means, since that statement doesn’t say anything about excluding high-variance strategies. For example, many would say there’s high-quality evidence about AI risk, lots of careful reasoning has been done to assess its impact on the long term future, and many have concluded that working on such things is likely to help others as much as possible, though we may not be able to measure that help for a long time and we may make mistakes.
Two, it’s likely a strategic choice to not be in-your-face about high variance giving strategies since they are pretty weird to most people. EA orgs have chosen to develop a public brand that is broadly appealing and not controversial on the surface (even if EA ends up courting controversy anyway because of its consequences for opportunities we judge to be relatively less effective than others). The definitions of EA you point to seem in line with this.