I’m a researcher in psychology and philosophy.
Stefan_Schubert
Some of the listed policy levers seem in themselves insufficient for the government’s policy to qualify as soft nationalization. For instance, that seems true of government contracts and some forms of government oversight. You might consider coming up with another term to describe policies that are towards the lower end of government intervention.
In general, you focus on the contrast between soft and total nationalization, but I think it could also be useful to make contrasts with lower levels of government intervention. In my view, there’s a lot of ground between a hands-off approach and soft nationalization. Most industries (e.g. in the US) have a lot of regulation—and so the government doesn’t take a hands-off approach—yet haven’t been subjected to soft nationalization, as I’d use that term.
(Tbc this is a purely conceptual point and not an argument for or against any particular level of government intervention.)
I don’t think one can infer that without having the whole distribution across different countries. It may just be that small countries have greater variance. (Though I don’t know what principle the author used for excluding certain countries.)
I agree with that.
Also, notice that the top countries are pretty small. That may be because random factors/shocks may be more likely to push the average up or down for small countries. Cf:
Kahneman begins the chapter with an example of data interpretation using cases of kidney cancer. The lowest rates of kidney cancer are in counties that are rural and vote Republican. All sorts of theories jump to mind based on that data. However, a few paragraphs later Kahneman notes that the data also shows that the counties with the highest rates of kidney cancer are rural and vote Republican. The problem is that rural counties have small sample sizes and therefore are prone to extremes.
@Lucius Caviola and I discuss such issues in Chapter 9 of our recent book. If I understand your argument correctly I think our suggested solution (splitting donations between a highly effective charity and the originally preferred “favourite” charity) amounts to what you call a barbell strategy.
Detail, but afaict there were at least five Irish participants.
I was going to make a point about a ‘lack of EA leadership’ turning up apart from Zach Robinson, but when I double-checked the event attendee list I think I was just wrong on this. Sure, a couple of big names didn’t turn up, and it may depend on what list of ‘EA leaders’ you’re using as a reference, but I want to admit I was directionally wrong here.
Fwiw I think there was such a tendency.
There’s already a thread on this afaict.
EA’s CEO says Sam Bankman-Fried was never an effective altruist
I don’t think the piece says that.
Thanks, this is great. You could consider publishing it as a regular post (with or without further modification).
I think it’s an important take since many in EA/AI risk circles have expected governments to be less involved:
https://twitter.com/StefanFSchubert/status/1719102746815508796?t=fTtL_f-FvHpiB6XbjUpu4w&s=19
It would be good to see more discussion on this crucial question.
The main thing you could consider adding is more detail; e.g. step-by-step analyses of how governments might get involved. For instance, this is an important question that it would be good to learn more about:
“does it look more like much more regulations or international treaties with civil observers or more like almost-unprecedented nationalization of AI as an industry[?]”
But of course that’s hard.
Thanks, no worries.
I don’t find it hard to imagine how this would happen. I find Linch’s claim interesting and would find an elaboration useful. I don’t thereby imply that the claim is unlikely to be true.
Thanks, I think this is interesting, and I would find an elaboration useful.
In particular, I’d be interested in elaboration of the claim that “If (1, 2, 3), then government actors will eventually take an increasing/dominant role in the development of AGI”.
The reasoning is that knowledgeable people’s belief in a certain view is evidence for that view.
This is a type of reasoning people use a lot in many different contexts. I think it’s a valid and important type of reasoning (even though specific instances of it can of course be mistaken).
Some references:
https://plato.stanford.edu/entries/disagreement/#EquaWeigView
https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty
Yes; it could be useful if Stephen briefly explained how his classification relates to other classifications. (And what advantages it has—I guess simplicity is one.)
Thoughtful post.
If you’re perceived as prioritising one EA cause over another, you might get pushback (whether for good reason or not). I think that’s more true for some of these suggestions than for others. E.g. I think having some cause-specific groups might be seen as less controversial than having varying ticket prices for the same event depending on the cause area.
I’m struck by how often two theoretical mistakes manage to (mostly) cancel each other out.
If that’s so, one might wonder why that happens.
In these cases, it seems that there are three questions; e.g.:
1) Is consequentialism correct?
2) Does consequentialism entail Machiavellianism?
3) Ought we to be Machiavellian?
You claim that people get the answers to the first two questions wrong, but the answer to the third question right, since the two mistakes cancel each other out. In effect, two incorrect premises lead to a correct conclusion.
It’s possible that in the cases you discuss, people tend to have the firmest intuitions about question 3) (“the conclusion”). E.g. they are more convinced that we ought not to be Machiavellian than that consequentialism is correct/incorrect or that consequentialism entails/does not entail Machiavellianism.
If that’s the case, then it would be unsurprising that mistakes would cancel each other out. E.g. someone who came to believe that consequentialism entails Machiavellianism would be inclined to reject consequentialism, since they would otherwise need to accept that we ought to be Machiavellian (which, by hypothesis, they don’t).
(Effectively, I’m saying that people reason holistically, reflective equilibrium-style, and not just from premises to conclusions.)

A corollary is that it may be less common than one might think for “a little knowledge” to be dangerous. Suppose that someone initially believes that consequentialism is wrong (Question 1), that consequentialism entails Machiavellianism (Question 2), and that we ought not to be Machiavellian (Question 3). They then change their view on Question 1, adopting consequentialism. That creates an inconsistency between their three beliefs. But if they have firmer beliefs about Question 3 (the conclusion) than about Question 2 (the other premise), they’ll resolve the inconsistency by rejecting the remaining incorrect premise, not by endorsing the dangerous conclusion that we ought to be Machiavellian.
My argument is of course schematic, and how plausible it is will no doubt vary depending on which of the six cases you discuss we consider. I do think that “a little knowledge” is sometimes dangerous in the way you suggest. Nevertheless, I think the mechanism I discuss is worth remembering.
In general, I think a little knowledge is usually beneficial, meaning our prior that it’s harmful in an individual case should be reasonably low. However, priors can of course be overturned by evidence in specific cases.
How much of this is lost by compressing to something like: virtue ethics is an effective consequentialist heuristic?
It doesn’t just say that virtue ethics is an effective consequentialist heuristic (if it says that) but also has a specific theory about the importance of altruism (a virtue) and how to cultivate it.
There’s not been a lot of systematic discussion on which specific virtues consequentialists or effective altruists should cultivate. I’d like to see more of it.
@Lucius Caviola and I have written a paper where we put forward a specific theory of which virtues utilitarians should cultivate. (I gave a talk along similar lines here.) We discuss altruism but also five other virtues.
Another factor is that recruitment to the EA community may be more difficult if it’s perceived as very demanding.
I’m also not convinced by the costly-signalling arguments discussed in the post. (This is from a series of posts on this topic.)
I think this discussion is a bit too abstract. It could be helpful to give concrete examples of non-academic EA research that you think should have been published in academic outlets. It would also help if you gave some details of what changes the researchers would need to make to get their research past peer reviewers.
Right. I think it could be useful to be quite careful about what terms to use since, e.g., some who might actually be fine with some level of monitoring and oversight would be more sceptical of it if it’s described as “soft nationalisation”.
You could search the literature (e.g. on other industries) for existing terminology.
One approach could be to use terminology that’s explicit about there being a spectrum. E.g. you could use terms like “tiers”, “steps”, “spectrum”, etc. And then you could argue that the US government’s approach is unlikely to be at either end of the spectrum (hands-off or total nationalisation).