I also hoped to imply that ITN is more than a heuristic: it serves a rhetorical purpose as well.
I worry that its seeming simplicity can belie the complexity of cause prioritization. Calculating an ITN rank or score can be treated as the end, rather than the beginning, of such an effort. The numbers can tug the mind in the direction of arguing with the scores, rather than evaluating the argument used to generate them.
My hope is to encourage people to treat ITN scores just as you say—taking them lightly and setting them aside once they’ve developed a deeper understanding of an issue.
Thanks for reading.
Agreed. However, one of the subcritiques in that point is the divide-by-zero issue that makes issues that have received zero investment “theoretically unsolvable”: a percentage increase in resources from a starting point of zero always yields zero additional resources. The critic seems to feel this is an artifact of decomposing the problem in this way.
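A minimal sketch of the arithmetic behind that artifact (the function name is mine, purely to illustrate the definition of tractability as progress per percentage increase in resources):

```python
def resources_from_pct_increase(current_resources, pct):
    """Absolute resources added by a given percent increase on the current base."""
    return current_resources * pct / 100

# With $10M already invested, a 1% increase adds $100k of real resources.
print(resources_from_pct_increase(10_000_000, 1))

# With zero investment, ANY percent increase adds nothing, so a tractability
# score defined as "progress per % increase in resources" makes an untouched
# cause look unsolvable -- the artifact the critic points to.
print(resources_from_pct_increase(0, 1))
print(resources_from_pct_increase(0, 500))
```

The artifact disappears if tractability is instead defined over absolute resources, which is one way of reading the critic's complaint about how the problem is divided up.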
I leave it to the forum to judge!
Can you give a few examples? Having options and avoiding risk are both good things, all else being equal.
There’s a range of posts critiquing ITN from different angles, including many of the ones you specify. I was working on a literature review of these critiques, but stopped in the middle. It seemed to me that organizations that use ITN do so in part because it’s an easy-to-read communication framework: it boils down an intuitive synthesis of a lot of personal research into something that feels like a metric.
When GiveWell analyzes a charity, they have a carefully specified framework they use to derive a precise cost effectiveness estimate. By contrast, I don’t believe that 80k or OpenPhil have anything comparable for the ITN rankings they assign. Instead, I believe that their scores reflect a deeply researched and well-considered, but essentially intuitive personal opinion.
I want to give more context for the MacAskill quote.
The most obvious implication [of the Hinge of History hypothesis], however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building. If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some future, more influential, time comes.
Here, he is talking about strategies for solving specific problems, X-risks in this case. This is not relevant to the cluelessness argument advanced by Mogensen and that I am addressing. Later in his article, though, he does touch on the topic.
Perhaps we’re at a really transformative moment now, and we can, in principle, do something about it, but we’re so bad at predicting the consequences of our actions, or so clueless about what the right values are, that it would be better for us to save our resources and give them to future longtermists who have greater knowledge and are better able to use their resources, even at that less pivotal moment.
Buck-passing, or punting, is compatible with the “debugging” concept, but not with Mogensen’s “cluelessness.” With debugging, you deliberate as long as is possible or productive, and then act as wisely as possible. Once you’ve made a decision, you fix side effect problems as they arise, which might include finding ways to reverse the decision where possible. Although some decisions will result in genuine enormous moral disasters, such as slavery or Nazism, this approach appears to me to be both net good and our only choice.
With Mogensen’s cluelessness argument, it doesn’t matter how long you deliberate, because you have to be able to predict the ripple effects and their moral weights into the far future first. Since that’s impossible, you can never know the moral value of an action. We therefore can’t morally prefer one action over another. I’m not strawmanning this argument. It really is that extreme.
Buck-passing/punting is also not identical to “debugging.” In buck-passing or punting, we’re deferring a decision on a specific issue to a wiser future. A current ban on genetically engineered human embryos is an example. In debugging, we’re making a decision, and trusting the future to resolve the unexpected difficulties. Climate change is an example: our ancestors created fossil fuel-based industry, and we are dealing with the unexpected consequences.
The reason I don’t feel the need to engage with the cluelessness literature is because, when sensible, it’s simply providing another approach to describing basic problems from economic theory and common sense, which I understand reasonably well and expect I can learn better from those sources. When done badly, it’s a salad of sophistry with a thick and unnecessary dressing of formal logic. I can’t read everything and I think I’ll learn a lot more of value from studying, oh, almost anything else. These writers need to convince me that they’ve produced insights of value if they want me to engage. I’m just describing why they haven’t succeeded in that project so far.
By the way, I appreciate you responding to my post. Although I’m sure you can see I’ve got little patience for Mogensen and the cluelessness literature I’ve seen more generally, I think it’s important to have conversations about it. And it’s always nice to have someone take an interest.
Her first example of “complex cluelessness” is the same population size argument made by Mogensen, which I dealt with in section 2a. I think both simple and complex cluelessness are dealt with nicely by the debugging model I am proposing. But I’m not sure it’s a valid distinction. I suspect all cluelessness is complex.
Debugging is a form of capacity building, but the distinction I drew is necessary. Sometimes we try to build advance capacity to solve an as-yet-intractable problem, as in AI safety research. This is vulnerable to the cluelessness argument. Even if we are successful in those efforts and manage to solve the problem, we still cannot predict all the precise long-term consequences. Too much moral dark matter remains. This form of capacity-building cannot stand up to Mogensen and Greaves’ critique, because it doesn’t address the problem they raise.
This debugging model does. Beyond our ability to build capacity to solve specific and known intractable problems, we already and likely always will have capacity to solve problems in general. Unknown unknowns become known, and then we solve them. We keep the good, fix the bad, and develop more wisdom to deal with the ugly.
I’m not planning on engaging further with the cluelessness literature because what I’ve seen makes me think GPI is off track. It strikes me as a combination of sophistry and obscurantism that I find hard to take seriously. This writing was an attempt to get my own thoughts in order. I invite others who find their ideas more compelling to explain why “debugging,” in conjunction with a frank acknowledgement that the future is risky, can’t account for cluelessness.
Same. Keep up the good work. I’m looking forward to hearing more.
In my OP, I just meant that if the applicant gets in, they can teach. Too many applicants doesn’t necessarily indicate that the field is oversubscribed; it may just mean that there’s a mentorship bottleneck. One possible reason is that senior people in the field simply enjoy direct work more than teaching and choose not to focus on it. Insofar as that’s the case, candidates who are willing to focus more on providing mentorship are especially suitable, assuming they get in and a bottleneck remains by the time they become senior.
Thanks for the feedback, it helps me understand that my original post may not have been as clear as I thought.
In the absence of other empirical information, I think it’s a safe assumption that present bottlenecks correlate with future bottlenecks, though your first point is well taken.
I’m not quite following your second argument. It seems to say that the same level of applicant pool growth produces fewer mentors in mentorship-bottlenecked fields than in less mentorship-bottlenecked fields, but I don’t understand why. Enlighten me?
Your third point is also correct. Stated generally, finding ways to increase the availability of the primary bottlenecked resource, or accomplish the same goal while using less of it, is how we can get the most leverage.
There are already at least three companies in this space: RoomieMatch, Roomi, and Roomster. I wonder why nobody I know uses them, even though dating apps are very popular.
It seems to me that roommate matching poses triangulation, trust, and transfer problems that go beyond what OKCupid has to deal with:
There are more than two people involved, and the difficulty of finding communal compatibility grows combinatorially with the number of roommates.
By the same token, people move in and out more frequently with larger numbers of roommates, often on short notice, making it hard to maintain a stable equilibrium of preferences.
Imagine if it was easy to “date your future housemates,” perhaps by living together for a month. It’s already emotionally painful for people to deal with or inflict rejection in one-on-one dating. Imagine being the “odd man out” in this situation. That sounds like a recipe for really uncomfortable social dynamics.
People who rent because they can’t afford their own place probably can’t afford a high-touch service. People who have more money could buy their own place and interview enough roommates to make sure everyone is a good fit with them personally.
Landlords often influence or even entirely control the process of finding new roommates. There are also laws around evictions that make it very difficult to kick somebody out if it’s not working for the others, whereas there are no legal barriers to breaking up with someone you’re dating if there’s no marriage and no kids.
There’s a much higher effort and commitment barrier required to move than to go on a date.
This is speculative, but OKCupid’s success may stem from capitalizing on a cultural institution that makes romantic love feel of vast importance. By contrast, finding an ideal group of roommates doesn’t have the same cultural importance: we still dream of having our own place by ourselves or with our own biological family. To have comparable success, such a service would need to create a new dream. Even if that’s your dream, is it the dream of your housemates?
Similarly, the service OKCupid provides may be less in matching people with compatible characteristics, and more in identifying an abundance of single people and getting them hyped to go on a date. The purpose of the “matching” is to trick you into building up anticipation, not to ensure a really good fit (after all, if it did that too well, people wouldn’t come back for more!). Instinct, hormones, and love do most of the work of making people stick together in the end.
When people do try to start intentional group houses, they’re often organized around a shared social movement, which already has word-of-mouth and social media channels where people can learn about these opportunities for free.
I think a company would do better to work on solving one or more of these problems.
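To put a number on the first point above, here is a quick sketch of how compatibility checks scale (assuming, as an illustration, that compatibility must hold for every pair of housemates, and that larger subgroup dynamics matter too):

```python
from math import comb

# If compatibility must hold for every pair of housemates, the number of
# pairwise checks grows quadratically (n choose 2); if every subgroup's
# dynamic matters, the number of subgroups of size >= 2 grows exponentially.
# A dating app only ever has to evaluate n = 2: a single pair.
for n in range(2, 7):
    pairs = comb(n, 2)
    subgroups = 2**n - n - 1  # all subsets of size >= 2
    print(f"{n} housemates: {pairs} pairs, {subgroups} subgroups")
```

By six housemates there are already 15 pairwise relationships and 57 possible subgroups, which is one way to see why a two-person matching model doesn’t transfer cleanly.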
Crossposted from the LW forum
Good point, fixed.
It’s just that your first comment sounded a bit like you’re implying that 10% of the population suffers from excruciating kidney stones. With your estimated numbers (10% of the population affected at some point in their lives, 2% of cases at 9/10 on the pain scale), it would be more like 0.2%.
That’s probably still a lot if you multiply by the world population and total pain episode lengths. I don’t know how long such a case typically lasts with modern medical care, but plenty of people don’t have access to it.
Of course, this all depends on whether the 2% number is a reasonable estimate, and whether the pain scale is exponential.
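For the record, the arithmetic above works out as follows (the 10% and 2% figures are the parent comment's estimates; the 8 billion world-population figure and the variable names are mine):

```python
lifetime_incidence = 0.10  # share of population with kidney stones at some point (parent comment's estimate)
share_at_9_of_10 = 0.02    # share of those cases reaching 9/10 pain (parent comment's estimate)

share_of_population = lifetime_incidence * share_at_9_of_10
print(f"{share_of_population:.1%}")  # the 0.2% figure

world_population = 8_000_000_000  # rough round figure
print(round(share_of_population * world_population), "people")
```

That puts the group in the millions even before multiplying by episode lengths, which is the point about it still being a lot of suffering in absolute terms.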
But my guess is that a better strategy would probably be better medical prevention and treatment of underlying causes in most cases. After all, flooding the USA with powerful painkillers hasn’t exactly been a boon to the nation (see opioids).
I’ll give that some thought, but I’m no expert on this. Just pulling together some memories of things I’ve read and experiences I’ve had. But my impression is that chronic extreme pain is something that we never adapt to.
A top Google hit for “extinguish coal seam fires” says the government paid $42 million to relocate Centralia’s residents when early attempts to put the fire out failed. That suggests to me that they had a much higher estimate than you of the cost of putting it out.