Kudos btw for writing this. Consciousness is a topic where it can be really hard to make progress and I worry that people aren’t posting enough about it for fear of saying something wrong.
SoerenMind
I agree that physical theories of consciousness are panpsychist if they say that every recurrent net is conscious (or that everything that can be described as implementing GWT is conscious). The main caveats for me are:
Does anyone really claim that every recurrent net is conscious? It seems so implausible. E.g. if I initialize my net with random parameters, it just computes garbage. Or if I have a net with 1 parameter it seems too simple. Or if the number of iterations is 1 (as you say), it’s just a trivial case of recurrence. Or if it doesn’t do any interesting task, such as prediction...
(Also, most recurrent nets in nature would be gerrymandered. Though I could imagine there are enough that aren’t, such as potentially your examples.)
NB, recurrence doesn’t necessarily imply recurrent processing (the term from recurrent processing theory). The ‘processing’ part could hide a bunch of complexity?
I like your description of how complex physical processes like global attention / GWT reduce to simple ones like feedforward nets.
But I don’t see how this implies that e.g. GWT reduces to panpsychism. E.g. to describe a recurrent net as a feedforward net you need a ridiculous number of parameters (with the same parameter values in each layer). So that doesn’t imply that the universe is full of recurrent nets (even if it were full of feedforward nets which it isn’t).
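To make the parameter-count point concrete, here’s a toy numpy sketch (all names and sizes are mine, not from the post): unrolling a recurrent net for T steps does give a feedforward net with T layers, but only because every layer is forced to carry copies of the same parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy recurrent net: one 4x4 weight matrix W reused at every time step.
W = rng.standard_normal((4, 4))

def rnn(h, steps):
    for _ in range(steps):
        h = np.tanh(W @ h)
    return h

# The "unrolled" feedforward equivalent: one layer per step, with every
# layer constrained to hold the same parameter values.
def unrolled_ffn(h, layers):
    for Wk in layers:
        h = np.tanh(Wk @ h)
    return h

steps = 10
h0 = rng.standard_normal(4)
layers = [W] * steps  # 10 layers x 16 parameters each, all identical

# Same function, but the feedforward description uses steps-times as many
# (tied) parameters as the recurrent one.
assert np.allclose(rnn(h0, steps), unrolled_ffn(h0, layers))
```

So a generic feedforward net is not a recurrent net: only the measure-zero subset with exactly tied layers corresponds to one.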
To draw a caricature of your argument as I understand it: It turns out computers can be reduced to logic gates. Therefore, everything is a computer.
Or another caricature: Recurrent nets are a special case of {any arrangement of atoms}. Therefore any arrangement of atoms is an RNN.
edit: missing word
Your link goes to the UK version. Here’s US:
Just as a data point, “eye clear” took off for the conference ICLR so people seem to find the “clear” pronunciation intuitive.
Thanks for writing this. I don’t have a solution but I’m just registering that I would expect plenty of rejected applicants to feel alienated from the EA community despite this post.
It’s just an informal way to say that we’re probably typical observers. It’s named after Copernicus because he found that the Earth isn’t as special as people thought.
Very nice list!
Great work!!!
Hmmm isn’t the argument still pretty broadly applicable and useful despite the exceptions?
If you want a single source, I find the 80000 hours key ideas page and everything it links to quite comprehensive and well written.
Like most commenters, I broadly agree with the empirical info here. It’s sort of obvious, but telling others things like “don’t go out of your way to use less plastic” or even just creating unnecessary waste in a social situation can be inconsiderate towards people’s sensibilities. Of course, this post advocates no such thing but I want to be sure nobody goes away thinking these things are necessarily OK.
(I was recently reminded of a CEA research article about how considerateness is even more important than most people think, and EAs should be especially careful because their behavior reflects on the whole community.)
On second thoughts, I think it’s worth clarifying that my claim is still true even though yours is important in its own right. On Gott’s reasoning, P(high influence | world has 2^N times the # of people who’ve already lived) is still just 2^-N (that’s 2^-(N-1) if summed over all k>=N). As you said, these tiny probabilities are balanced out by asymptotically infinite impact.
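A quick numerical sanity check of the tail-sum arithmetic (my own illustration, not from the original comment):

```python
# Gott-style tail probability: P(world has at least 2^N times the people
# who've already lived) = sum over k >= N of 2^-k, which telescopes to 2^-(N-1).
N = 10
tail = sum(2.0 ** -k for k in range(N, 300))  # truncation error ~2^-300, negligible
assert abs(tail - 2.0 ** -(N - 1)) < 1e-15
```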
I’ll write up a separate objection to that claim but first a clarifying question: Why do you call Gott’s conditional probability a prior? Isn’t it more of a likelihood? In my model it should be combined with a prior P(number of people the world has). The resulting posterior is then the prior for further enquiries.
Interesting point!
The diverging series seems to be a version of the St Petersburg paradox, which has fooled me before. In the original version, you have a 2^-k chance of winning 2^k for every positive integer k, which leads to infinite expected payoff. One way in which it’s brittle is that, as you say, the payoff is quite limited if we have some upper bound on the size of the population. Two other mathematical ways to break the divergence are 1) making the payoff just 1.99^k or 2) making it 2^(0.99k).
If you’re just presenting a prior I agree that you’ve not conditioned on an observation “we’re very early”. But to the extent that your reasoning says there’s a non-trivial probability of [we have extremely high influence over a big future], you do condition on some observation of that kind. In fact, it would seem weird if any Copernican prior could give non-trivial mass to that proposition without an additional observation.
I continue my response here because the rest is more suitable as a higher-level comment.
On your prior,
P(high influence) isn’t tiny. But if I understand correctly, that’s just because
P(high influence | short future) isn’t tiny whereas
P(high influence | long future) is still tiny. (I haven’t checked the math, correct me if I’m wrong).
So your argument doesn’t seem to save existential risk work. The only way to get a non-trivial P(high influence | long future) with your prior seems to be by conditioning on an additional observation “we’re extremely early”. As I argued here, that’s somewhat sketchy to do.
So your prior says, unlike Will’s, that there are non-trivial probabilities of very early lock-in. That seems plausible and important. But it seems to me that your analysis not only uses a different prior but also conditions on “we live extremely early” which I think is problematic.
Will argues that it’s very weird we seem to be at an extremely hingy time. So we should discount that possibility. You say that we’re living at an extremely early time and it’s not weird for early times to be hingy. I imagine Will’s response would be “it’s very weird we seem to be living at an extremely early time then” (and it’s doubly weird if it implies we live in an extremely hingy time).
If living at an early time implies something that is extremely unlikely a priori for a random person from the timeline, then there should be an explanation. These 3 explanations seem exhaustive:
1) We’re extremely lucky.
2) We aren’t actually early: E.g. we’re in a simulation or the future is short. (The latter doesn’t necessarily imply that xrisk work doesn’t have much impact because the future might just be short in terms of people in our anthropic reference class).
3) Early people don’t actually have outsized influence: E.g. the hazard/hinge rate in your model is low (perhaps 1/N where N is the length of the future). In a Bayesian graphical model, there should be a strong update in favor of low hinge rates after observing that we live very early (unless another explanation is likely a priori).
Both 2) and 3) seem somewhat plausible a priori so it seems we don’t need to assume that a big coincidence explains how early we live.
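As a purely illustrative sketch of the update in 3) — the model here is my own assumption, not anything from the post — one can link the hinge/hazard rate to survival and watch the posterior shift:

```python
# Two candidate per-period "hinge" (hazard) rates with equal prior mass.
priors = {0.01: 0.5, 0.0001: 0.5}

# Observation: the timeline survives N periods beyond us, i.e. we turn out
# to be very early in a long future. Surviving N independent hazards has
# probability (1 - p)^N, which is exponentially unlikely when p is high.
N = 1000
likelihood = {p: (1.0 - p) ** N for p in priors}

# Bayes update: almost all posterior mass lands on the low hinge rate.
Z = sum(priors[p] * likelihood[p] for p in priors)
posterior = {p: priors[p] * likelihood[p] / Z for p in priors}
print(posterior)
```

On these (made-up) numbers the posterior puts over 99% of its mass on the low rate, which is the qualitative point: observing that we’re very early in a long timeline is itself evidence against high hinge rates.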
This sounds really cool. Will have to read properly later. How would you recommend a time-pressured reader go through this? Are you planning a summary?
Just registering that I’m not convinced this justifies the title.
This was also discussed on LessWrong:
https://www.lesswrong.com/posts/J8JvZxkABWwdXSH9d/is-the-coronavirus-the-most-important-thing-to-be-focusing