Information system designer. https://aboutmako.makopool.com
Conceptual/AI writings are mostly on my LW profile https://www.lesswrong.com/users/makoyass
(this is partially echoing/paraphrasing lukeprog) I want to emphasize the observer-count angle (anthropic measure/phenomenology, though it can be put much more straightforwardly than that), which to me seems like the simplest way neuron count would lead to increased moral valence. You kind of mention it, and it’s discussed more in the full document, but for most of the post it’s ignored.
Imagine a room where a pair of robots are interviewed. The robot interviewer is about to leave and go home for the day, and will have to decide whether to leave the light on or off. They know that one of the robots hates the dark, but the other strongly prefers it.
The robot who prefers the dark also happens to be running on 1000 redundant server instances having their outputs majority-voted together to maximize determinism and repeatability of experiments or something. The robot who prefers the light happens to be running on just one server.
The dark-preferring robot doesn’t even know about its redundancy; it doesn’t lead it to report any greater intensity of experience. There’s no difference in the report, but it’s obvious that the dark-preferring robot is having its experience magnified a thousand times, because it is exactly as if there are a thousand of them, each having that same experience of being in a lit room, even though they don’t know about each other.
You turn the light off before you go.
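For concreteness, here’s a minimal sketch of the tally (all names and numbers below are illustrative, not a real system): the majority vote never changes the robot’s report, but if each redundant instance hosts an observer, the count of experiences scales with instance count, so turning the light off trades one suffering observer for a thousand content ones.

```python
# Toy rendering of the thought experiment above; everything here is
# illustrative. The majority vote reveals nothing extra, but the
# observer tally scales with instance count.
from collections import Counter

def robot_report(prefers_dark: bool, light_on: bool) -> str:
    """Deterministic report; every redundant instance returns the same thing."""
    return "suffering" if light_on == prefers_dark else "content"

def majority_vote(prefers_dark: bool, light_on: bool, instances: int) -> str:
    votes = Counter(robot_report(prefers_dark, light_on) for _ in range(instances))
    return votes.most_common(1)[0][0]  # trivially unanimous here

def observer_tally(light_on: bool) -> Counter:
    tally = Counter()
    tally[robot_report(prefers_dark=True, light_on=light_on)] += 1000  # redundant robot
    tally[robot_report(prefers_dark=False, light_on=light_on)] += 1    # single-server robot
    return tally

print(majority_vote(True, True, 1000))  # "suffering": the vote adds no information
print(observer_tally(light_on=True))    # Counter({'suffering': 1000, 'content': 1})
print(observer_tally(light_on=False))   # Counter({'content': 1000, 'suffering': 1})
```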
Making some assumptions about how the brain distributes the processing of suffering, assumptions we’re not completely sure of but which seem more likely than not, we should have some expectation that neuron count has the same anthropic boosting effect.
it’s not clear to me that that is the assumption of most
Thinking that much about anthropics will be common within the movement, at least.
Since we’re already in existential danger due to AI risk, it’s not obvious that we shouldn’t read a message that has only a 10% chance of being unfriendly, a friendly message could pretty reliably save us from other risks. Additionally, I can make an argument for friendly messages potentially being quite common:
If we could pre-commit now to never doing a SETI attack ourselves, or if we could commit to only sending friendly messages, then we’d know that many other civs, having at some point stood in the same place as us, will have also made the same commitment, and our risk would decrease.
But I’m not sure, it’s a nontrivial question as to whether that would be a good deal for us to make, would the reduction in risk of being subjected to a SETI attack be greater than the expected losses of no longer being allowed to do SETI attacks?
I believe the forum allows commenting anonymously, though I wouldn’t know how to access that feature.
Pseudonyms would be a bit better, but it’ll do.
I’m excited by the prospect of Polis, but it’s frustratingly limited. The system has no notion of whether people are agreeing with a statement because it’s convincing or bridging the gap, or because it’s banal.
In this case… I don’t think we’re really undergoing any factionalization about this? If so, should we not just try talking more… that usually works pretty well with us.
I guess prediction markets will help.
Prediction markets about the judgements of readers are another thing I keep thinking about: systems where people can make themselves accountable to Courts of Opinion by betting on their prospective judgements. Courts occasionally grab a comment, investigate it more deeply than usual, and enact punishment or reward depending on their findings.
I’ve raised these sorts of concepts with lightcone as a way of improving the vote sorting (where we’d sort according to a prediction market’s expectation of the eventual ratio between positive and negative reports from readers). They say they’ve thought about it.
Although I cheer for this,
What makes EA, EA, what makes EA antifragile, is its ruthless transparency
- although I really want to move to a world where radical transparency wins, I don’t believe that we’re in a world like that right now (I wish I could explain why I think that without immediately being punished for excess transparency, but for obvious reasons that seems impossible).
How do we get to that world? Or if you see this world in better light than I do, if you believe that the world is already mostly managing to avoid punishing important true ideas, what’re the dynamics that preserve and promote that?
You might try to explain it away
I wouldn’t, I didn’t realize they were recognizing new saints! That’s quite surprising and I can’t see why they’d do it unless they believed it was correct.
Trying to rationalise Christian belief as ‘well I guess they must be compatibilist deists’
I will persist with this a bit, though: there must be some extent of compatibilist deism, given the extent to which the world was obviously and visibly set up to plausibly work in an autonomous way, and the extent to which most of the Catholics I know are deeply interested in science, believe in evolution, etc. They know how many of these machines drive themselves (although they might draw the line at the brain). They may believe in ongoing miracles, but they know that miracles are not the norm, and they must wonder why.
which is unhelpful because nobody (not even the Calvinists!) thinks providence is incompatible with agency
Mostly I was just trying to derive, in my odd way, that they wouldn’t. But if that’s common knowledge, yeah, it might not have been helpful.
And divine providence cannot just mean that the deist god set everything up just right in the beginning such that everything just worked out as planned … Your model, I think, is incompatible with Christian dogma
Mm, that is my relationship with nature. I’d heard that there were deists in the Christian world (I think there still are?), so I didn’t realize it was incompatible with Christian dogma as it is carried.
And I guess… personally… I don’t understand how very many people could sustain a perception of the world as a place subject to ongoing divine intervention, so I’m surprised if it’s not common. If there are and were interventions, a lot of them must consist of measures to keep people like me from getting to see any sign of them (and I think about that a lot).
Could you unpack “Compatibilists all deny that impersonal determinism is at all analogous to some agent intervening in the causal structure (this is part of what it means to be a compatibilist)” a bit?
If you want to call this position a ‘pre-compatibilist confusion’ -
I… think I probably don’t
You partly acknowledge this, but I really think there’s probably a bit of pre-compatibilist confusion going on here. Knowing with certainty that you will succeed at preventing catastrophic risks does not excuse you from working hard to do it. Prophecies do not undermine agency; they include it, they are about it, they are realized through and in accordance with the agency of their subjects.
Consider this: When I was writing this comment, I was absolutely certain that I would finish writing the comment, and that it would be posted. It wouldn’t be in spite of my agency but because of it.
From what I understand, it’s impossible to really digest the existence of humanity under a god that is responsible for everything that happens without developing this providence-agency compatibilism, so I really would expect it to be a majority assumption?
I think there’s another thing you’re identifying which I’d agree is a complication: Secular longtermist models never actually give us a certain prophesy of success even assuming the cooperation of the agency of humanity, that is not our prophesy. We aren’t expecting to reach 0 risk of failure.
But I don’t know that the metaphysics patch that resolves the conflict is particularly messy… I have something for it… but I’m not quite a christian, so I’ll refrain from suggesting a patch unless asked.
I’m dubious that EAs younger than about 40 would end up being anything more than pawns in political games they don’t understand
Can’t disagree, only 32, still don’t fully understand how american politics works.
the name of this video’s sponsor is 80,000 hours
Oh. I was really hoping Veritasium had just organically gotten interested in differential progress; that’s kind of a letdown lmao.
I really want to thank 80,000 hours for sponsoring this part of the video
Alternatively, maybe he just wanted to save the 80K ad for the video that would be most watched by the audience who’d be interested in 80K.
I definitely don’t spend 2 hours a day scrolling facebook, though I may spend about that long scrolling twitter (mostly miserably but occaisonally I see something really useful).
I think I’d do that even if there were no algorithm, though. There isn’t one in my twitter list of consistently good accounts, nor in mastodon, I still check these things often, they are not much less juicy.
People often say that twitter was designed to be addictive. It mostly wasn’t designed at all. It was selected. And most of that “addiction” is just a craving for a thriving social space online.
in which a minor slip-up means instant death for everyone so a 1 – epsilon probability of success is unacceptable.
Oh, does Eliezer still think (speak?) that way? I think that would be the first clear reasoning error (one that can’t just be written off as a sort of opinionated specialization) I’ve seen him make about AI strategy. In a situation where there’s a certain yearly baseline risk of the deployment of misaligned AGI occurring (which is currently quite low, so this wouldn’t be active yet!), it does actually become acceptable to deploy a system that has a well-estimated risk of being misaligned. Techniques that only have a decent chance of working are actually useful and should be collected enthusiastically.
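To make that concrete with invented numbers: if baseline risk compounds yearly while we wait for a better technique, deploying becomes acceptable once our system’s well-estimated misalignment probability falls below the risk absorbed by waiting.

```python
# Toy arithmetic for the argument above; both parameters are illustrative,
# not estimates of the actual situation.
def cumulative_baseline_risk(yearly_risk: float, years: int) -> float:
    """Chance someone else deploys misaligned AGI while we wait `years` years."""
    return 1 - (1 - yearly_risk) ** years

p_our_system_misaligned = 0.10  # well-estimated risk of our own deployment
yearly_baseline = 0.02          # assumed yearly risk of a misaligned deployment elsewhere
years_of_waiting = 10

risk_of_waiting = cumulative_baseline_risk(yearly_baseline, years_of_waiting)
print(f"risk of waiting ~ {risk_of_waiting:.3f}; "
      f"deploy now iff {p_our_system_misaligned:.2f} < {risk_of_waiting:.3f}")
```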
I don’t know that he is still taking a zero risk policy, I’ve been seeing a lot more “no it will almost certainly be misaligned” recently, but it could have given rise to a lot of erroneous inferences.
I didn’t realize how many mid posts the algorithm has been curating out for me… :{ I didn’t finish scrolling. Felt inefficient.
Broad input (low production quality), narrow output (extensively filtered by extended curation systems) is probably the main reason memes were ever considered good. Without curation, it’s… well, it’s almost literally not “memes” at that point, as they’re not doing the thing where they propagate and reproduce and compete.
With a ‘select all’ format, one loses the information about which are the most important
Have you found that people answer that way? I’ll only tend to answer with more than one option if they’re all about equally important.
You might expect it to be uncommon for multiple factors to be equally important. I think one of the reasons it is common, in the messy reality that we have (which is not the reality that most statisticians want), is that multiple factors are often crucial dependencies.
Example: a person who spends a lot of their political energy advocating for Quadratic Funding (a democratic way of deciding how public funding is allocated) cannot be said to be more statist than they are libertarian, or vice versa, because the concept of QF just wouldn’t exist and couldn’t be advocated without both schools of thought. There may be ways of quantifying the role of each school in its invention, but they’re arbitrary (you probably don’t want to end up just measuring whichever arbitrary quantifications of qualitative dependencies respondents have in mind today). The concept rests on principles from both schools; asking which is more important to them is like asking whether having skin is more important to an animal than having blood.
Well, in one sense that is shallow, what would an agnostic person + (some other religion mean)?
Uh, that specifically? Engaging in practices and being open to the existence of the divine, but ultimately not being convinced. This is not actually a strange or uncommon position. (What if there are a lot of statisticians who are trying to make their work easier by asking questions that make the world seem simpler than it is?)
it seems like some religions like Buddhism, which accepts other practices, would be understood to accept other practices [but not believe in them or practice them?]
That just sounds like a totally bizarre way to answer the question as I understood it (and possibly as it was stated; I don’t remember the details). I wouldn’t expect a Buddhist with no other affiliations to answer that way. I don’t believe the ambiguity is there.
I think one consideration is that they want to make the surveys comparable year to year
Makes sense. But I guess if it’s only been one year, there wouldn’t have been much of a cost to changing it this year; that cost would have been smaller than the cost of not having it right in future years.
if someone could select different political identities or religions, that would make the result difficult to interpret
Could you explain why? I don’t see why it should, really.
WAY too many of the questions only allow checking a single box, or a limited number of boxes. I’m not sure why you’ve done this? From my perspective it almost never seems like the right thing, and it’s going to significantly reduce the accuracy of the measurements you get, at least from me.
An example: there’s a question like “what is the main type of impact you expect to have”, and I expect to do things that are entrepreneurial, which involve or largely consist of community-building, communication, and research. I don’t know which of those four impact types is going to be the largest (it’s not even possible to assess that, and I’m not sure it’s a meaningful question, considering that the impacts often depend on more than one of those factors at the same time: we can’t blame any one factor), but even if I did know how to assess that, the second-place category might have a similar amount of impact to the first, meaning that by only asking for the peak, you’re losing most of the distribution.
Other examples, which are especially galling, are the questions about religious and political identity. The notion that people can only adhere to one religion is actually an invention of monotheist Abrahamic traditions, and arguably a highly spiritually corrosive assumption. I’m not positioned to argue that here, but the survey shouldn’t be imposing monotheistic assumptions.
The idea that most EAs would have a simple political identity or political theory is outright strange to me. Have you never actually seen EAs discussing politics? Do you think people should have discrete political identities? I think having a discrete, easily classifiable political identity is pretty socially corrosive as well and shouldn’t be imposed by the survey! (although maybe an ‘other’ or ‘misc’ category is enough here. People with mixed political identities tend not to be big fans of political identity in general.)
The media is an extremely different discursive environment than the EA forum and should have different guidelines.
I don’t want to assume that the public sphere cannot become earnestly truthseeking, but right now it isn’t at all and bad things happen if you treat it like it is.