"Who framed it in terms of individual rights?"
Nuno did. I'm not criticizing you or suggesting this legislation is anything other than bad.
David Mathers
One reason to be suspicious of taking into account lost potential lives here is that if you always do so, it looks like you might get a general argument for "development is bad". Rich countries have low fertility compared to poor countries. So anything that helps poor countries develop is likely to prevent some people from being born. But it seems pretty strange to think we should wait until we find out how much development reduces fertility before we can decide if it is good or bad.
A bit of a tangent in the current context, but I have slight issues with your framing here: mechanisms that prevent the federal government telling the state governments what to do are not necessarily mechanisms that protect individual citizens, although they could be. But equally, if the federal government is more inclined to protect the rights of individual citizens than the state government is, then they are the opposite. And sometimes framing it in terms of individual rights is just the wrong way to think about it: e.g. if the federal government wants some economic regulation and the state government doesn't, and the regulation has complex costs and benefits that work out well for some citizens and badly for others, then "is it the feds or the state government protecting citizens' rights?" might not be a particularly helpful framing.
This isn't just abstract: historically in the South, it was often the feds who wanted to protect Black citizens and the state governments who wanted to avoid this under the banner of states' rights.
I am biased because Stuart is an old friend, but I found this critique of the idea that social media use causes poor mental health fairly convincing when I read it: https://www.thestudiesshowpod.com/p/episode-25-is-it-the-phones Though obviously you shouldn't just make your mind up about this based on a single source, and there might be a degree of anti-woke and therefore anti-anti-tech bias creeping in.
In principle, or only in practice?
I have some sympathy with that view, except that I think this is a problem for a much wider class of views than utilitarianism itself. The problem doesn't (entirely) go away if you modify utilitarianism in various attractive ways like "don't violate rights", "you're allowed/obligated to favour friends and family to some degree", or "doing the best thing is just good, not obligatory". The underlying issue is that it seems silly to ever think you can do more good by helping insects than more normal beneficiaries, or that you can do more good in a galaxy-brained indirect way than directly, but there are reasonably strong theoretical arguments that those claims are either true, or at least could be true for all we know. That is an issue for any moral theory that says we can rank outcomes by desirability, regardless of how it thinks the desirability of various outcomes factors into determining what the morally correct action is. And any sane theory, in my view, thinks that how good/bad the consequences of an action are is relevant to whether you should do it, whether or not other things are also relevant to whether the action should be performed.
Of course it is open to the non-consequentialist to say that the goodness of consequences is sometimes relevant, but never with insects. But that seems like cheating to me unless they can explain why.
What should be done, in your view, about the possibility that insects or arthropods are conscious and affected by our interventions?
EDITED to add: Just reviving the idea that it's ok to favour humans over animals to a very high degree won't help here, since it's animal versus animal interests we are dealing with.
"I believe AI environmental damage is a key priority (compared to other environmentally damaging activities)"
No expertise, but my prior is that it isn't, because of all the industries that use electricity, only a few can be amongst the most significant drivers of climate change.
But the critique also mentions that Ord says the long reflection could involve the wider public and that he admits other disciplines will be important too. I think you are just reacting to the fact that he clearly doesn't like Ord or longtermism, and that he thinks that even Ord's moderate position is still elitist. That's different from misrepresentation of a kind that makes him an untrustworthy source.
Why do you think the quote from Ord shows the characterization to be unfair? (At a glance at the paper, I agree that the argument that Will, to be consistent, must be some kind of radical minarchist libertarian simply because he says political experimentation is valuable is pretty weak.)
"Throwing soup at van gogh paintings have none of these attributes, so it is counter-productive."
What's the evidence it was counterproductive?
What would be evidence for sentience in your view?
It doesn't follow from there being no clear definition of something that there aren't clear positive and negative cases of it, only that it's blurry at the boundaries. For example, suppose the only things that existed were humans, rocks, and lab-grown human food. There still wouldn't be a clear definition of "conscious", but it would be clear that only humans were conscious, since lab-grown meat and veg and rocks clearly don't count on any interpretation of "consciousness". Maybe all mites obviously don't count too. I agree with you that BB can't just assume that about mites though, and needs to provide an argument.
What about the argument that there are so many of them that even a tiny chance they are conscious is super-important?
Presumably there are at least some people who have long timelines, but also believe in high risk and don't want to speed things up. Or people who are unsure about timelines, but think risk is high whenever it happens. Or people (like me) who think X-risk is low* and timelines very unclear, but even a very low X-risk is very bad. (By very low, I mean like at least 1 in 1000, not 1 in 1x10^17 or something. I agree it is probably bad to use expected value reasoning with probabilities as low as that.)
I think you are pointing at a real tension though. But maybe try to see it a bit from the point of view of people who think X-risk is real enough and raised enough by acceleration that acceleration is bad. It's hardly going to escape their notice that projects at least somewhat framed as reducing X-risk often end up pushing capabilities forward. They don't have to be raging dogmatists to worry about this happening again, and it's reasonable for them to balance this risk against risks of echo chambers when hiring people or funding projects.
*I'm less sure that merely catastrophic biorisk from human misuse is low, sadly.
I don't think you can possibly know whether they really are thinking of the unconditional probabilities, or whether they just have very different opinions and instincts from you about the whole domain, which make very different genuinely conditional probabilities seem reasonable to them.
I don't find accusations of fallacy helpful here. The authors say explicitly in the abstract that they estimated the probability of each step conditional on the previous ones. So they are not making a simple, formal error like multiplying a bunch of unconditional probabilities whilst forgetting that this only works if the probabilities are uncorrelated. Rather, you and Richard Ngo think that their estimates for the explicitly conditional probabilities are too low, and you are speculating that this is because they are still really thinking of the unconditional probabilities. But I don't think "you are committing a fallacy" is a very good or fair way to describe "I disagree with your probabilities and I have some unevidenced speculation about why you are giving probabilities that are wrong".
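To make the distinction concrete (a minimal sketch of the arithmetic, not the authors' own numbers): the chain rule says that for a sequence of steps $A_1, \dots, A_n$,

$$P(A_1 \cap \dots \cap A_n) = P(A_1)\,P(A_2 \mid A_1)\cdots P(A_n \mid A_1, \dots, A_{n-1}),$$

whereas the product of the unconditional probabilities $P(A_1)P(A_2)\cdots P(A_n)$ only equals the joint probability if the steps are independent. Since the abstract says the factors being multiplied are already the conditional terms on the left-hand side, the disagreement is about the values assigned to those terms, not about the form of the calculation.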
"A fraudulent charity" does not sound to me much like "a charity that knowingly used a mildly overoptimistic figure for the benefits of one of its programs even after admitting under pressure it was wrong". Rather, I think the rhetorical force of the phrase comes mostly from the fact that to any normal English speaker it conjures up the image of a charity that is a scam in the sense that it is taking money, not doing charitable work with it, and instead just putting it into the CEO's (or whoever's) personal bank account. My feeling on this isn't really affected by whether the first thing meets the legal definition of fraud; probably it does. My guess is that many charities that almost no one would describe as "fraudulent organizations" have done something like this, or something equivalently bad, at some point in their histories, probably including some pretty effective ones.
Not that I think that means Singeria have done nothing wrong. If they agree the figure is clearly overoptimistic they should change it. Not doing so is deceptive, and probably it is illegal. But I find it a bit irritating that you are using what seems to me to be somewhat deceptive rhetoric whilst attacking them for being deceptive.
They seem quite different to me: one is about AIs being able to talk like a smart human, and the other is about their ability to actually do novel scientific research and other serious intellectual tasks.
"More generally, I am very skeptical of arguments of the form 'We must ignore X, because otherwise Y would be bad'. Maybe Y is bad! What gives you the confidence that Y is good? If you have some strong argument that Y is good, why can't that argument outweigh X, rather than forcing us to simply close our eyes and pretend X doesn't exist?"
This is very difficult philosophical territory, but I guess my instinct is to draw a distinction between:
a) ignoring new evidence about what properties something has, because that would overturn your prior moral evaluation of that thing.
b) deciding that well-known properties of a thing don't contribute towards it being bad enough to overturn the standard evaluation of it, because you are committed to the standard moral evaluation. (This doesn't involve inferring that something has particular non-moral properties from the claim that it is morally good/bad, unlike a).)
a) always feels dodgy to me, but b) seems like the kind of thing that could be right, depending on how much you should trust judgements about individual cases versus judgements about abstract moral principles. And I think I was only doing b) here, not a).
Having said that, I remember a conversation I had in grad school in which a faculty member who was probably much better at philosophy than me claimed that even a) is only automatically bad if you assume moral anti-realism.