Karma: 860

# Reducing global AI competition through the Commerce Control List and Immigration reform: a dual-pronged approach

3 Sep 2024 5:28 UTC
11 points
• 3 Sep 2024 3:13 UTC
1 point
0 ∶ 0
in reply to: Anthony DiGiovanni’s comment

You can just widen the variance in your prior until it is appropriately imprecise, so that the variance of your prior reflects the amount of uncertainty you have.

For instance, perhaps a particular disagreement comes down to the increase in p(doom) deriving from an extra 0.1 C in global warming.

We might have no idea whether 0.1 C of warming causes an increase of 0.1% or 0.01% in p(doom), but be confident it isn’t 10% or more.

You could model the distribution of your uncertainty with, say, a beta distribution with b=100.

You might wonder, why b=100 and not b=200, or 101? It’s an arbitrary choice, right?

To which I have two responses:

1. You can go one level up and model the beta parameter on some distribution of all reasonable choices, say, a uniform distribution between 10 and 1000.

2. While it is arbitrary, I claim that avoiding expected effects because we can’t make a fully non-arbitrary choice is itself an arbitrary choice. This is because we are acting in a dynamic world where every second, opportunities can be lost, and no action is still an action: the action of foregoing the counterfactual option. So by avoiding assigning any outcome, and acting accordingly, you have implicitly, and arbitrarily, assigned an outcome value of 0. When there’s some morally important outcome we can only model with somewhat arbitrary statistical priors, doing so nevertheless seems less arbitrary than just assigning an outcome value of 0.
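To make the two-level idea concrete, here is a minimal Monte Carlo sketch. The choice of a=1 for the beta distribution and the Uniform(10, 1000) hyperprior are illustrative assumptions, not part of the original argument:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hierarchical model of the (hypothetical) P(doom) increase from an
# extra 0.1 C of warming. The beta parameter b is itself uncertain, so
# go one level up: put a uniform distribution over all "reasonable"
# choices of b, then draw the increase from Beta(a=1, b).
n = 100_000
b = rng.uniform(10, 1000, size=n)   # prior over the arbitrary-seeming parameter
increase = rng.beta(1.0, b)         # sampled P(doom) increments, mostly tiny

print(f"mean increase:        {increase.mean():.4f}")
print(f"P(increase > 10%):    {(increase > 0.10).mean():.4f}")
```

Widening the hyperprior bounds is how "appropriately imprecise" shows up numerically: the mean stays small while the tail mass shifts, and you can check directly whether the conclusion is sensitive to the choice of b.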

• This leaves me deeply confused: I would have thought a single (if complicated) probability function is better than a set of functions, because a set of functions doesn’t (by default) include a weighting amongst its members.

It seems to me that you need to weight the probability functions in your set according to some intuitive measure of their plausibility, according to your own priors.

If you do that, then you can combine them into a joint probability distribution, and then make a decision based on what that distribution says about the outcomes. You could go for EV based on that distribution, or you could make other choices that are more risk averse. But whatever you do, you’re back to using a single probability function. I think that’s probably what you should do. But that sounds to me indistinguishable from the naive response.
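As a minimal sketch of that combination step, with invented component distributions and plausibility weights, you can resample each candidate distribution in proportion to its weight to form the joint mixture, then read off the EV or a more risk-averse statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three candidate probability functions for some outcome, each
# represented by samples, plus intuitive plausibility weights.
# All of these numbers are illustrative.
samples = {
    "optimistic":  rng.beta(1, 50, 50_000),
    "moderate":    rng.beta(2, 20, 50_000),
    "pessimistic": rng.beta(3, 10, 50_000),
}
weights = {"optimistic": 0.5, "moderate": 0.3, "pessimistic": 0.2}

# Combine into one joint (mixture) distribution by resampling each
# component in proportion to its plausibility weight.
mixture = np.concatenate([
    rng.choice(s, size=int(len(s) * weights[k]))
    for k, s in samples.items()
])

print(f"mixture mean (EV):      {mixture.mean():.4f}")
print(f"95th percentile (risk): {np.quantile(mixture, 0.95):.4f}")
```

Whatever decision rule you then apply (EV, a high quantile, etc.), you are applying it to a single distribution, which is the point being made above.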

The idea of a “precise probability function” is in general flawed. The whole point of a probability function is that you don’t have precision. A probability function over a real event is (in my view) just a mathematical formulation modeling my own subjective uncertainty; there is no precision to it. That’s the Bayesian perspective on probability, which seems like the right interpretation in this context.

• As Yann LeCun recently said, “If you do research and don’t publish, it’s not science.”

With all due respect to Yann LeCun, in my view he is as wrong here as he is dismissive about the risks from AGI.

Publishing is not an intrinsic, definitional part of science. Peer-reviewed publishing definitely isn’t—it has only been the default for several decades to half a century or so. It may not be the default in another half century.

• If Trump still thinks AI is “maybe the most dangerous thing” I would be wary of giving up on chances to leverage his support on AI safety.

In 2022, individual EAs stood for elected positions within each major party. I understand there are Horizon fellows with both Democrat and Republican affiliations.

If EAs can engage with both parties in those ways, combined with the fact that the presumptive Republican nominee may be sympathetic, I wouldn’t give up on Republican support for AI safety yet.

• ha I see. Your advice might be right but I don’t think “consciousness is quantum”. I wonder if you could say what you mean by that?

Of course I’ve heard that before. In the past, when I have heard people say that, it’s been from advocates of free-will theories of consciousness trying to propose a physical basis for consciousness that preserves indeterminacy of decision-making. Some objections I have to this view:

1. Most importantly, as I pointed out here: consciousness is roughly orthogonal to intelligence, so your view shouldn’t give you reassurance about AGI. We could have a formal definition of intelligence, and causal instantiations of it, without any qualia, any what-it’s-like-to-be-it subjective consciousness, existing in the system. There is also conscious experience with minimal intelligence, like experiences of raw pleasure, pain, or observing the blueness of the sky. As I explain in the linked post, consciousness is also orthogonal to agency and goal-directed behavior.

2. There’s a great deal of research about consciousness. I described one account in my post, and Nick Humphrey does go out on a limb more than most researchers do, but my sense is that most neuroscientists of consciousness endorse some account roughly equivalent to Nick’s. While some (not all, or even a majority) would probably concede the hard problem remains, based on what we do know about the structure of the physical substrates underlying consciousness, it’s hard to imagine what role “quantum” effects would play.

3. It fails to add any sense of meaningful free will, because a brain that makes decisions based on random quantum fluctuations doesn’t in any meaningful way have more agency than a brain that makes decisions based on pre-determined physical causal chains. While a [hypothetical] quantum-based brain does avoid being pre-determined by physical causal chains, now it is just pre-determined by random quantum fluctuations.

4. Lastly, I have to confess a bit of prejudice against this view. In the past it seems to have been proposed so naively that people were just mashing together two phenomena no one fully understands and proposing they’re related because... why, exactly? The only thing they have in common, as far as I know, is that we don’t understand them. That’s not much of a reason to believe in a hypothesis that links them.

5. Assuming your view was correct, if someone built a quantum computer, would you then be more worried about AGI? That doesn’t seem so far off.

• 12 Feb 2024 0:48 UTC
4 points
1 ∶ 0
in reply to: Elliot Billingsley’s comment

Elliot has a phenomenally magnetic personality and is consistently positive and uplifting. He’s generally a great person to be around. His emotional stamina gives him the ability to uplift the people around him and I think he is a big asset to this community.

• TLDR: I’m looking for researcher roles in AI Alignment, ideally translating technical findings into actionable policy research

Skills & background: I have been a local EA community builder since 2019. I have a PhD in social psychology and wrote my dissertation on social/motivational neuroscience. I also have a BS in computer science and spent two years in industry as a data scientist building predictive models. I’m an experienced data scientist, social scientist, and human behavioral scientist.

Location/remote: Currently located on the West Coast of the USA. Willing to relocate to the Bay Area for sufficiently high remuneration, or to Southern California or Seattle for just about any suitable role. Would relocate to just about anywhere, including the US East Coast, Australasia, the UK, or China, for a highly impactful role.

Availability & type of work: I finish work teaching at the University of Oregon around April, and if I haven’t found something by then, will be available again in June. I’m looking for full-time work from there or part time work in impactful roles for an immediate start.

Brief resume

Email/contact: benjsmith@gmail.com

Other notes: I don’t have a strong preference for cause areas and would be highly attracted to roles reducing AI existential risk, improving animal welfare and global health, or improving our understanding of the long-term future. I suspect my comparative advantage is in research roles (broadly defined) and in data science work; technical summaries for AI governance, or evals work, might be where it is strongest.

# Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety

9 Feb 2024 6:40 UTC
15 points
(www.nist.gov)
• But I would guess that pleasure and unpleasantness isn’t always because of the conscious sensations, but these can have the same unconscious perceptions as a common cause.

This sounds right. My claim is that there are all sorts of unconscious perceptions and valenced processing going on in the brain, but all of that is only experienced consciously once there’s a certain kind of recurrent cortical processing of the signal, which can loosely be described as “sensation”. I mean that very loosely; it can even include memories of physical events or semantic thought (which you might understand as a sort of recall of auditory processing). Without that recurrent cortical processing modeling the reward and learning process, all that midbrain dopaminergic activity probably does not get consciously perceived. Perhaps it does, indirectly, when the dopaminergic activity (or lack thereof) influences the sorts of sensations you have.

But I’m getting really speculative here. I’m an empiricist and my main contention is that there’s a live issue with unknowns and researchers should figure out what sort of empirical tests might resolve some of these questions, and then collect data to test all this out.

• I would say thinking of something funny is often pleasurable. Similarly, thinking of something sad can be unpleasant. And this thinking can just be inner speech (rather than visual imagination)....Also, people can just be in good or bad moods, which could be pleasant and unpleasant, respectively, but not really consistently simultaneous with any particular sensations.

I think most of those things actually can be reduced to sensations; moods can’t be, but then, are moods consciously experienced, or do they only predispose us to interpret conscious experiences more positively or negatively?

(Edit: another set of sensations you might overlook when you think about conscious experience of mood are your bodily sensations: heart rate, skin conductivity, etc.)

But this also seems like the thing that’s more morally important to look into directly. Maybe frogs’ vision is blindsight, their touch and hearing are unconscious, etc., so they aren’t motivated to engage in sensory play, but they might still benefit from conscious unpleasantness and aversion for more sophisticated strategies to avoid them. And they might still benefit from conscious pleasure for more sophisticated strategies to pursue pleasure.

They “might” do, sure, but what’s your expectation they in fact will experience conscious pleasantness devoid of sensations? High enough to not write it off entirely, to make it worthwhile to experiment on, and to be cautious about how we treat those organisms in the meantime—sure. I think we can agree on that.

But perhaps we’ve reached a sort of crux here: is it possible, or probable, that organisms could experience conscious pleasure or pain without conscious sensation? It seems like a worthwhile question. After reading Humphrey I feel like it’s certainly possible, but I’d give it maybe around 0.35 probability. As I said in OP, I would value more research in this area to try to give us more certainty.

If your probability that conscious pleasure and pain can exist without conscious sensation is, say, over 0.8 or so, I’d be curious about what leads you to believe that with confidence.

• To give a concrete example, my infant daughter can spend hours bashing her toy keyboard with 5 keys. It makes a sound every time. She knows she isn’t getting any food, sleep, or any other primary reinforcer to do this. But she gets the sensations of seeing the keys light up and a cheerful voice sounding from the keyboard’s speaker each time she hits it. I suppose the primary reinforcer just is the cheery voice and the keys lighting up (she seems to be drawn to light—light bulbs, screens, etc).

During this activity, she’s playing, but also learning about cause and effect—about the reliability of the keys reacting to her touch, about what kind of touch causes the reaction, and how she can fine-tune and hone her touch to get the desired effect. I think we can agree that many of these things are transferable skills that will help her in all sorts of things in life over the next few years and beyond?

I’m sort of conflating two things that Humphrey describes separately: sensory play and sensation seeking. In this example it’s hard to separate the two. But Humphrey ties them both to consciousness, and perhaps there’s still something we can learn from an activity that combines the two.

In this case, the benefits of play are clear, and I guess the further premise is that consciousness adds additional motivation for sensory play because, e.g., it makes things like seeing lights and hearing cheery voices much more vivid and hence reinforcing, and allows the incorporation of those things into other systems that enable action planning about how to get the reinforcers again, which makes play more useful.

I agree this argument is pretty weak, because we can all agree that even the most basic lifeforms can do things like approach or avoid light. Humphrey’s argument is something like: the particular neurophysiology that generates consciousness also provides the motivation and ability for play. I think I have said about as much as I can to repeat the argument, and you’d have to go directly to Humphrey’s own writing for a better understanding of it!

• Yes I see that is a reasonable thing to not be convinced about and I am not sure I can do justice to the full argument here. I don’t have the book with me, so anything else I tell you is pulling from memory and strongly prone to error. Elsewhere in this comments section I said

When you have sensations, play can teach you a lot about your own sensory processes, and you can subsequently use what you’ve learned to leverage your visual sensations to accomplish objectives. It seems odd that an organism that can learn (as almost all can) would evolve visual sensations but not a propensity to play in a way that helps them to learn about those sensations.

And

Humphrey theorises that the evolutionary impulse for conscious sensations includes (1) the development of a sense of self, (2) which in turn allows for a sense of other, and theory of mind. He thinks that mere unconscious perception can’t be reasoned about or used to model others because, being unconscious, it is inaccessible to the global workspace for that kind of use. In contrast, conscious sensations are accessible in the global workspace and can be used to imagine the past, the future, or what others are experiencing. The cognitive and sensory empathy this allows can enable an organism to behave socially, to engage in deceit or control, to more effectively care for another, to anticipate what a predator can and can’t see, etc.

I believe the idea is something like sentience enables a lot more opportunity to learn about the world, and learning opportunities can be obtained through play. Not taking those opportunities if you’re able is sort of like leaving free adaptive money on the table.

• To me “conscious pleasure” without conscious sensation almost sounds like “the sound of one hand clapping”. Can you have pure joy unconnected to a particular sensation? Maybe, but I’m sceptical. First, the closest I can imagine is calm joyful moments during meditation, or drug-induced euphoria, but in both cases I think it’s at least plausible there are associated sensations. Second, to me, even the purest moments of simple joy seem to be sensations in themselves, and I don’t know if there’s any conscious experience without sensations.

Humphrey theorises that the evolutionary impulse for conscious sensations includes (1) the development of a sense of self, (2) which in turn allows for a sense of other, and theory of mind. He thinks that mere unconscious perception can’t be reasoned about or used to model others because, being unconscious, it is inaccessible to the global workspace for that kind of use. In contrast, conscious sensations are accessible in the global workspace and can be used to imagine the past, the future, or what others are experiencing. The cognitive and sensory empathy this allows can enable an organism to behave socially, to engage in deceit or control, to more effectively care for another, to anticipate what a predator can and can’t see, etc.

I would add that conscious sensation allows for more abstract processing of sensations, which enables tool use and other complex planning like long term planning in order to get the future self more pleasurable sensations. Humphrey doesn’t talk about that much, perhaps because it’s only a small subset of conscious species that have been observed doing those things, so perhaps mere consciousness isn’t sufficient to engage in them (some would argue you need language to do good long term planning and complex abstraction).

Humphrey believes that mammals in general engage in play, which he thinks all (but not only) conscious animals do, and that they also engage in sensation-seeking (e.g., sliding down slopes or moving fast through the air for no reason), which he thinks only (but not all) conscious animals do. He’d say the same about birds, and he treats the fact that the distribution of those behaviors across species lines up nicely with the species possessing the neural structures he thinks generate consciousness as additional confirmation of his theory.

Animals do engage in play with unpleasant experiences; e.g., playfighting can include moderately unpleasant sensations. I suppose the benefit of those experiences being conscious might be to enable more sophisticated strategies for avoiding them in future. It isn’t that Humphrey thinks play is necessary for consciousness to emerge; it’s that he thinks all conscious animals are motivated to engage in play.

I feel this last answer maybe hasn’t answered all your questions but I was a bit confused by your last paragraph, which might have arisen out of an understandable misunderstanding of the claim about consciousness and play.

• Humphrey’s argument that fish aren’t conscious doesn’t rest only on their not having the requisite brain structures, because, as you say, it is possible consciousness could have developed in their own structures in ways that are simply distinct from our own. But then, Humphrey would ask, if they have visual sensations, why are they uninterested in play? When you have sensations, play can teach you a lot about your own sensory processes, and you can subsequently use what you’ve learned to leverage your visual sensations to accomplish objectives. It seems odd that an organism that can learn (as almost all can) would evolve visual sensations but not a propensity to play in a way that helps them to learn about those sensations.

Perhaps fish just don’t benefit from learning more about their visual sensations. The sensations are adaptive, but learning about them confers no additional adaptive advantage. That seems a stretch to me, because it’s hard for me to imagine sensations being adaptive without learning and experimenting with them conferring additional advantage.

You could also respond by citing examples where fish can play and are motivated to sensation-seek, as you already have, and I think if Humphrey believed your examples he would find them persuasive evidence about those organisms’ consciousness.

• I feel like I should be writing and reading posts about AI but honestly I am too intimidated to go near that topic.

• I tend to think that questions about which organisms or systems are conscious mostly depend on identifying the physical correlates of consciousness and understanding how they work as a system, and that questions about panpsychism, illusionism, eliminativism, or even Chalmers’s Hard Problem don’t bear on this question very much. I think there’s probably still a place for that philosophical debate because (1) there might be implications about where to look for the physical systems and (2) as I said to Michael earlier, illusionism might change our perspective on whether we assign special moral value to conscious experience at all. But I think (1) is marginal, and (2) is sort of a long shot.

In contrast, I think empirical and scientific investigation can help us understand a lot about which systems are conscious, and about what sort of conscious experiences they have, so I think most morally cruxy questions of consciousness are scientific and empirical.

Consequently, I wasn’t too bothered by Humphrey side-stepping this issue, although I basically agree he did, because he offered solid theory and empirical investigation that suggests further empirical tests that might help us make progress on understanding consciousness in animals and other systems.

The D.B. case is also an interesting one—I don’t see why it isn’t plausible to imagine that the operation (and similarly split-brain cases as documented by Nagel) might lead to a second vestige of consciousness as cut off from you as that of your family, friends, and coworkers. Except this fragment doesn’t have control of motor or speech functions, how horrible! It can only pass information on to the ‘dominant’ one.

That was my reaction when I first read about split-brain patients. I now doubt it’s all that horrible. First, there’s been plenty of research on split-brain patients, and I don’t think anyone has discovered signs of distress from split halves that are cut off from speech expression; those halves do have other ways of communicating, e.g., through signs. Second, in humans, much of distress is governed physiologically, so (1) we would be able to detect physiological signs of stress, but more importantly (2) even if there’s a conscious half of a split brain which can’t express itself, its mood-state might be normal, because it shares a body with the other half; the two jointly set mood, and the system overall might not be in distress. Finally, even if consciousness isn’t illusory, conscious will often is; far more of our decisions are determined subconsciously than we think, and if the illusion still holds, the loss of conscious control might not even be perceived.

• say that pain is bad (even if it is not phenomenal) because it constitutively includes the frustration of a desire, or the having of a certain negative attitude of dislike

I’m curious how, excluding phenomenal definitions, he defines “frustration of a desire” or “negative attitude of dislike”, because I wonder whether these would include extremely simple frustrations, like preventing a computer-generated character in a computer game from reaching its goal. We could program an algorithm to try to satisfy a desire (“navigate through a maze to get to the goal square”) and then prevent it from doing so, or even add additional cruelty by giving it an expectation that it is about to reach its goal and then preventing it.
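As a toy illustration of just how simple such a “frustrated desire” can be (the maze, the agent, and the goal square here are all hypothetical, invented for this sketch):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over a grid of 0 (open) and 1 (wall); returns a path or None."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # the "desire" is frustrated: no route to the goal

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
assert shortest_path(grid, (0, 0), (2, 0)) is not None  # goal reachable

grid[2][1] = 1  # wall off the goal square just before arrival
grid[1][2] = 1
print(shortest_path(grid, (0, 0), (2, 0)))  # None: goal now unreachable
```

The agent’s “desire” is nothing but a goal coordinate and a search loop, which is precisely why a non-phenomenal definition of frustration risks including cases like this one.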

I share your moral antirealism, but I don’t think I could be convinced to care about preventing the frustration of that sort of simple desire. It’s the qualia-laden desire that seems to matter to me, though that might be irrational if it turns out qualia are an illusion. I think within anti-realism it still makes sense to avoid stances that involve arbitrary inconsistencies. So if not qualia, I wonder what meaningful difference there is between a Starcraft AI’s frustrated desires and a human’s.

• Not absolutely sure I’m afraid. I lent my copy of the book out to a colleague so I can’t check.

Humphrey mentioned illusionism (page 80 acc to Google books) but iirc he doesn’t actually say his view is an illusionist one.

Personally I can’t stand the label “illusionism” because to me the label suggests we falsely believe we have qualia, and actually have no such thing at all! But your definition is maybe much more mundane—there, the illusion is merely that consciousness is mysterious or important or matters. I wish the literature could use labels that are more specific.

And it seems like the version matters a great deal too. If consciousness really is an illusion, and none of us really have qualia—we’re all p-zombies programmed to believe we aren’t—then I have a hard time understanding the point of altruism, or of anything more than instrumental morality. But if we’re just talking about an illusion that consciousness is a mysterious otherworldly thing, and somehow there really are qualia, then altruism feels like a meaningful life project to adopt.

On the whole, having read Humphrey’s book, I don’t think he explicitly said he was an illusionist, but perhaps his theory suggests it; I’m not sure. He didn’t really explain why exactly we should expect, a priori, that sensorimotor feedback loops would generate consciousness, just that they seem to do so empirically. Perhaps he cleverly sidestepped the issue. I think his theory could make sense whether you are an illusionist or not.

• Thanks Michael. For readers who are confused by my post but still want to know more, consider just reading (2), a very good précis by Nick Humphrey of the book I tried to summarize. It might serve you better than my essay.