Thanks! Yes! You’re totally right that going down the citation trail with the right paper can be better than search; I just edited to reflect that.
This spreadsheet seems great. So far we’ve only found ways to practice the early parts of literature review, so we never created anything this sophisticated, but it seems like a good method.
Iris.ai sounds potentially useful, I’ll definitely check it out!
So far we’ve done some things on inspectional note-taking, finding the logical argument structure of articles, and breaking down questions into subquestions. I’m not too sure what the next big thing will be, though. Some other ideas have been practicing finding flaws in articles (but that takes a bit too long for a 2hr session and is too field-specific), abstract writing, making figures, and picking the right research question. I haven’t been spending much time on this recently, though, so the ideas for actually implementing these aren’t top of mind.
That said, I do agree we should work to mitigate some of the problems you mention. It would be good to help people get clearer on how uncertain things are, to avoid groupthink and over-homogenization. I don’t think we should expect to diverge very much from how other successful movements have happened in the past, as there’s not really precedent for that working, though we should strive to test it out and push the boundaries of what works. In that respect I definitely agree we should get a better idea of how homogeneous things are now and get more explicit about what the right balance is (though explicitly endorsing some level of homogeneity might have its own awkward consequences).
I agree with some of what you say, but find myself less concerned about some of the trends. This might be because I have a higher tolerance for some of the traits you argue are present and because AI governance, where I’m mostly engaged now, may just be a much more uncertain topic area than other parts of EA given how new it is. Also, while I identify a lot with the community and am fairly engaged (was a community leader for two years), I don’t engage much on the forum or online so I might be missing a lot of context.
I worry about the framing of EA as not having any solutions, and the argument that we should just focus on finding the right paths without taking any real-world action on the hypotheses we currently have for impact. I think that to understand things like government, and to refine community views of how to affect it and what should be affected, we need to engage. Engaging quickly exposes ignorance and forces us to be beholden to the real world, not to mention giving us a lot of reason to engage with people outside the community.
Once a potential path to impact is identified, and thought through to a reasonable extent, it seems almost necessary to take steps to implement it as a next step in determining whether it is a fruitful thing to pursue. Granted, after some time we should step back and re-evaluate, but while you are pursuing the objective it’s not feasible to be second-guessing constantly (a similar idea to Nate Soares’ post ‘Diving In’).
That said, it seems useful to give people on the outside a clearer view of just how uncertain things are. While beginning to engage with AI governance, it took me a long time to realize just how little we know about what we should be doing. This despite explicit statements by people like Carrick Flynn in a post on the forum saying how little we know, and a research agenda which is mainly questions about what we should do. I’m not sure what more could be done, as I think it’s normal to assume people know what they’re doing; for me this was only solved by engaging more deeply with the community (though now I think I have a healthier understanding of just how uncertain most topic areas are).
I guess a big part of the disagreement here might boil down to how uncertain we really are about what we are doing. I would agree a lot more with the post if I were less confident about what we should be doing in general (and again, I frame this mostly in the AI governance area as it’s what I know best). The norms you advocate are mostly about maintaining cause agnosticism and focusing on deliberation and prioritization (right?), as opposed to being more action-oriented. In my case, I’m happier with the action-prioritization balance I observe than I guess you are (though I’m of course not as familiar with how the balance looks elsewhere in the community and don’t read the forum much).
I think your critique of the ITN framework might be flawed (though I haven’t read section 2 yet). I assume some of my critique must be wrong, as I still feel a bit confused about it, but I really need to get back to work...
One point that I find a bit confusing is your use of the term ‘marginal cost-effectiveness’. To my knowledge this is not an established term in economics or elsewhere. What I think you mean instead is the average benefit of a certain amount of money.
Cost-effectiveness is (according to Wikipedia, at least) generally expressed as something like 100 USD/QALY. This is calculated by looking at how much a program cost and how many QALYs it created, which gives us the average benefit of each $100 spent on the program. However, we gain no insight into what happened inside the program. Maybe the first $100 did all the work and the rest ended up being fluff; we don’t know. More likely, the money had diminishing marginal returns.
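To make that concrete, here’s a toy calculation with entirely made-up numbers (the budget, QALY counts, and tranche breakdown are all hypothetical). The point is that a single average hides the internal distribution of returns:

```python
# Average cost-effectiveness hides what happened inside a program.
# All numbers below are made up for illustration.

total_cost_usd = 1_000_000   # hypothetical program budget
total_qalys = 10_000         # hypothetical QALYs produced

avg_cost_per_qaly = total_cost_usd / total_qalys
print(f"Average: ${avg_cost_per_qaly:.0f} per QALY")  # Average: $100 per QALY

# The same average is consistent with very different internals,
# e.g. the first tranche of spending doing most of the work:
tranche_qalys = [6000, 2500, 1000, 400, 100]  # diminishing returns; sums to 10,000
tranche_cost = total_cost_usd / len(tranche_qalys)
for i, q in enumerate(tranche_qalys, 1):
    print(f"Tranche {i}: ${tranche_cost / q:,.0f} per QALY")
```

The headline figure is $100/QALY either way, but the last tranche here is far less cost-effective than the first, which is exactly the information a simple average throws away.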
When talking about tractability you say:
with importance and tractability alone, you could calculate the marginal cost-effectiveness of work on a problem, which is ultimately what we care about
You would know cost-effectiveness if you knew (amount spent so far)/(amount of good done). You know the amount spent from neglectedness, but you don’t know the amount of good already done with that money. I guess marginal cost-effectiveness = the average benefit of X more dollars. Let’s say that X is a doubling of the amount spent so far. I don’t think we can construe this as marginal, though, as doubling the money is not an ‘at the margin’ change. I think, then, that tractability gives you the average benefit of X more dollars (so no need for scale).
We still need neglectedness and scale though to do a proper analysis.
Scale because if something wasn’t a big problem, why solve it? And to look at neglectedness let’s use some made-up numbers:
Say that we as humanity have already spent 1 trillion USD on climate change (we use this to measure neglectedness) and got a 1% reduction in the risk of an extinction event (we use this to calculate the amount of good = 0.01 * present value of all future lives). That gives us cost-effectiveness (cost/good done). We DON’T know, however, what happens at the margin (if we put more money in); we just have an average. Assuming constant returns may seem (almost) reasonable for an intervention like bed net distribution, but it seems less reasonable when we’ve already spent 1 trillion USD on a problem. What we really need to know, then, is the benefit of, say, another 1 trillion USD. This, I think, is what 80k’s tractability measure is trying to get at: the average benefit (or cost-effectiveness) of another hunk of money/resources.
So, defending neglectedness a bit: if we think that the marginal benefit of more money is not constant (which seems eminently reasonable), then it makes sense to try to find out where we are on the curve. Neglectedness helps show us where we might be on the curve, even though we have little idea what the curve looks like (though I would generally find it safe to assume decreasing marginal returns). If we’re on the flat bit of the diminishing-marginal-returns curve then we sure as hell want to know, or at least find evidence which would indicate that to be likely.
So neglectedness is trying to find where we are on the curve, which helps us understand the marginal return to one more person/dollar entering (the true margin). This means that even if a problem is unsolvable, there might be easy gains to be had in terms of reducing risk on the margin. For something that is neglected but not tractable, we might be able to get huge benefits by throwing a few people/dollars in (x-risk reductions, for example), but those might peter off really quickly, making it intractable. It would then be less attractive overall, because putting a lot of people in would not be worth it.
Tractability says: if we were to dump lots more money in, what are the average returns going to look like? If we are now at the flat part of the curve, average returns might be FAR lower than they were in a cost-effectiveness analysis of what we already spent (the average returns of past spending).
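A minimal sketch of this distinction, using the made-up climate numbers from above and an arbitrary logarithmic curve purely for illustration (the curve shape, units, and coefficients are all assumptions, not estimates):

```python
import math

def total_good(spend_trillions: float) -> float:
    """Cumulative good done (e.g. %-points of x-risk reduced) after
    spending `spend_trillions` trillion USD. Arbitrary illustrative curve
    with diminishing marginal returns."""
    return 2.0 * math.log1p(spend_trillions)

spent = 1.0  # trillion USD already spent (what neglectedness measures)

# Three different numbers that a single 'cost-effectiveness' figure conflates:
avg_past = total_good(spent) / spent                # average return on past spending
marginal = 2.0 / (1.0 + spent)                      # derivative: return on the next dollar
avg_next = (total_good(2 * spent) - total_good(spent)) / spent  # average of the NEXT $1T

print(f"Average of past spending:  {avg_past:.2f} per $1T")
print(f"Marginal return right now: {marginal:.2f} per $1T")
print(f"Average of the next $1T:   {avg_next:.2f} per $1T")
```

On any diminishing-returns curve the three values come out in strictly decreasing order: the past average overstates the current marginal return, which in turn overstates the average return on the next large tranche, which is roughly what tractability is after.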
Maybe new intuitions for these:
Neglectedness: How much bang for the buck do we get for one more person/dollar?
Tractability: Is it worth dumping lots of resources into this problem?
Just to play devil’s advocate with some arguments against peace (in a not-so-well-thought-out way)… There’s a book called ‘The Great Leveler’ which puts forward the hypothesis that the only times widespread redistribution has happened are after wars. This means that without war we might expect consistently rising inequality. The effect has been due to mass mobilization (‘Taxing the Rich’ asserts that mass political willpower to increase redistribution has only come from veterans claiming that, having served, they should be compensated) and destruction of capital (in Europe much of the capital was destroyed in WW2 → massive decrease in inequality; the US less so on both fronts) (haven’t read the book though).
Spinning this further, we could be approaching a time where great power war would not have this effect. This is because less labor is required, and it would be higher-skilled; perhaps there would be little use for low-skilled grunts in near-future wars (or already). If we also saw less destruction of capital (maybe information warfare is the way of the future?), then we lose the mechanisms which made war a leveller in the past.
So we might be in the last era where a great power war (one of the only things we know reduces inequality) would be able to reduce inequality. If inequality continues to increase, we could see suboptimal societal values which could continue on indefinitely and/or cause a large amount of suffering in the medium run. This could also lead to more domestic unrest in the medium run, which would imply a peace-now vs. peace-later trade-off. Depending on how hingey the moment is for the long-term future, it could be better to have peace later.
Also, the UN was created post-WW2. Maybe we only have an appetite for major international cooperation after nasty wars?
Anyway… even after considering all that, peace and cooperation are probably good on net, but it’s not as obvious as it may seem.
(Wrote this on mobile, sorry for any errors and lack of having read more than a few pages of the books I cited)
I always recommend Nate Soares’ post ‘On Caring’ to motivate the need for rational analysis of problems when trying to do good. http://mindingourway.com/on-caring/
It took a surprisingly long time to find anything on real wage trends in Europe, but judging by the graphs on page 5 of this paper, it looks like Sweden, Norway, and to some extent the UK are exceptions to quite slow real-wage growth. Germany, France, Italy, Spain, and Denmark follow the wage stagnation of the US.
I very much agree, though, that my analysis is very focused on the US (as is the discussion in general). This paper shows that, at least on a micro level, there are demonstrated effects on wages and employment from automation in the UK.
I guess I’d conclude roughly that stagnation is happening in many (if not all) developed countries. I would wager that automation plays some role, though I would guess that role is relatively small in the grand scheme of things (for now).
Money in elections:
I think even if that theory were true, I would argue that campaign techniques are improving (à la Cambridge Analytica, AggregateIQ) such that in the near future money may be more persuasive. I don’t think we’ve really seen a campaign between two tech-savvy politicians willing to pay top dollar for voter manipulation (first one in 2020?), but if we did, I would expect campaign contributions to grow in importance. It would definitely be interesting to dive into this a bit more, though.
Pace of Automation:
Yes… I agree this is a major blind spot. I haven’t looked at this literature much at all and don’t really feel qualified to make serious assessments of the quality of the many predictions, but I agree there should be something there. I will add a few sentences following the ILO’s literature review on the Future of Work to give people an idea of what is being talked about.
Yeah, I tend to agree that sending the whole thing is unnecessary. The first 17 chapters, i.e. the printed version distributed at CFAR workshops (I think; I haven’t actually been to one), is enough to get people engaged enough to move to the online medium. I’m guessing sending just that small-looking book will make people more likely to read it, as seeing a 2,000-page book would definitely be intimidating enough to stop many from actually starting.
I do tend to think giving the print version is useful as it incurs some sort of reciprocity which should incentivize reading it.
I agree that a quick and decisive input from someone very knowledgeable about EA and the topic involved would be very useful and save a lot of time and indecision for people evaluating career options.
I think we can provide a bit of this through more engaged online communities around given topic areas. Not nearly as good as in-person talks, but people can at least get some general feedback on career ideas. I’m hoping to host an event later this year that will gather people interested in a cause area and use that as a catalyst to form a more cohesive online community. As far as I can tell (and in my experience), people tend not to engage much in an online community if they don’t know the people well, though it’s definitely true that some people are more than happy to engage with people they don’t know.
I don’t know how this could move forward but it seems like someone could potentially make a difference by engineering Facebook or Slack groups focused on certain cause areas to be more active places for general discussion and career advice. This would be so helpful for people who lack close contact with knowledgeable people in EA or within their cause area.
Yes! Totally agree. I think I mentioned very briefly that one should also be wary of social dynamics pushing toward EA beliefs, but I definitely didn’t address it enough. Although I think the end result was positive and that my beliefs are true (with some uncertainty, of course), I would guess that my update toward long-termism was due in large part to lots of exposure to the EA community and the social pressure that brings.
I basically bought some virtue signaling in the EA domain at the cost of signaling in broader society. Given that I hang out with a lot of EAs and plan to do so more in the future, I’d guess that if I were to rationally evaluate this decision it would look net positive in favor of changing toward long-termism (as you would also gain within the EA community by making a similar switch, though with some short-term ‘I told you so’ negative effects).
So yes, I think it was largely due to closer social ties to the EA community that this switch finally became worthwhile, and perhaps this was a calculation going on at the subconscious level. It’s probably no coincidence that I finally made a full switch-over during an EA retreat, where the broader-society costs of switching beliefs were less salient and the EA benefits much more salient. To have the perfect decision-making situation, I guess it would be nice to have equally good opportunities in communities representing every philosophical belief, but for now that seems a bit unlikely. I suppose it’s another argument for cultivating diversity within EA.
This brings up a whole other rabbit hole in thinking about how we want to appeal to people with some interest in EA but not yet committed to the ideas. I think the social aspect is probably larger than many might think. Of course, if we emphasized this we’d be limiting people’s choice to join EA in a rational way. But then what is ‘choice’, really, given the social construction of our personalities and desires...