Information system designer. https://aboutmako.makopool.com
Conceptual/AI writings are mostly on my LW profile https://www.lesswrong.com/users/makoyass
I don’t think this is really engaging with what I said, or that it should have been a reply to my comment.
he elsewhere implies that we should be willing to cause nuclear war to enforce his priorities
Ah, reading that, yeah this wouldn’t be obvious to everyone.
But here’s my view, which I’m fairly sure is also Eliezer’s view: if you do something that I credibly consider to be even more threatening than nuclear war (even if you don’t think it is; gain-of-function research is another example), and you refuse to negotiate towards a compromise where you could do the thing in a non-threatening way, and so I try to destroy the part of your infrastructure that you’re using to do it, and you respond to that by escalating to a nuclear exchange, then it is not accurate to say that I was the one who caused the nuclear war.
Now, if you think I have a disingenuous reason to treat your activity as threatening even though I know it actually isn’t (an accusation people often throw at OpenAI, and it might be true in OpenAI’s case), that you tried to negotiate a safer alternative but I refused it, and that I was really just demanding that you cede power, then you could go ahead and escalate to a nuclear exchange and it would be my fault.
But I’ve never seen anyone accuse Eliezer of believing those things for disingenuous power-seeking reasons, let alone argue for it competently. (I think I’ve seen some tweets implying that it’s a grift to fund his institute, but I honestly don’t know how a person could believe that, and even if it were true, I don’t think Eliezer would consider funding MIRI to be worth a nuclear war.)
Well it may interest you to know that the above link is about a novel negotiation training game that I released recently. Though I think it’s still quite unpolished, it’s likely to see further development. You should probably look at it.
There’s value in talking about the non-parallels, but I don’t think that justifies dismissing the analogy as bad. What makes an analogy good or bad?
I don’t think there are any analogies that are so strong that we can lean on them for reasoning-by-analogy, because reasoning by analogy isn’t real reasoning, and generally shouldn’t be done. Real reasoning is when you carry a model with you that has been honed against the stories you have heard, but the models continue to make pretty good predictions even when you’re facing a situation that’s pretty different from any of those stories. Analogical reasoning is when all you carry is a little bag of stories, and then when you need to make a decision, you fish out the story that most resembles the present, and decide as if that story is (somehow) happening exactly all over again.
There really are a lot of people in the real world who reason analogically. It’s possible that Eliezer was partly writing for them (someone has to), but I don’t think he wanted the LessWrong audience (who are ostensibly supposed to be studying good reasoning) to process it in that way.
Saw this on Manifund. Very interested. Question: have you noticed any need for negotiation training here? I would expect some, because disagreements about the facts are usually a veiled proxy battle for disagreements about values, and I would expect it to be impossible to address the root cause of the disagreement without acknowledging the value difference. Even after agreeing about the facts, I’d expect people to keep disagreeing about actions or policies until a mutually agreeable, fair compromise has been drawn up (until the negotiation problem has been solved).
But you could say that agreeing about the facts is a prerequisite to reaching a fair compromise. I believe this is true. Preference aggregation requires utility normalization, which requires agreement about the outcome distribution. But how do we explain that to people in English?
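One way to show it rather than explain it: a toy sketch (entirely my own illustration; the min-max normalization scheme and the numbers are made up, and a fuller treatment would use the agreed probabilities rather than just which outcomes are considered possible) where the aggregate ranking of two options flips depending on which outcomes both parties agree are on the table.

```python
# Toy illustration: aggregating two people's preferences by summing
# utilities that have each been rescaled to [0, 1] over the outcomes
# the parties agree are possible. Change that agreement and the
# normalization changes, and with it the aggregate ranking.

def normalize(utilities, possible_outcomes):
    """Min-max rescale a utility function over the agreed-possible outcomes."""
    lo = min(utilities[o] for o in possible_outcomes)
    hi = max(utilities[o] for o in possible_outcomes)
    return {o: (utilities[o] - lo) / (hi - lo) for o in possible_outcomes}

def aggregate(utility_fns, possible_outcomes):
    """Sum of normalized utilities over the agreed outcome set."""
    normed = [normalize(u, possible_outcomes) for u in utility_fns]
    return {o: round(sum(n[o] for n in normed), 2) for o in possible_outcomes}

# Arbitrary example numbers for two people and four outcomes.
alice = {"A": 0, "B": 3, "C": 10, "D": 5}
bob   = {"A": 4, "B": 0, "C": 5,  "D": 20}

print(aggregate([alice, bob], ["A", "B", "C", "D"]))  # B scores above A
print(aggregate([alice, bob], ["A", "B", "C"]))       # ...but A beats B once D is ruled out
```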
I was also curious about this. All I can see is:
Males mature rapidly, and spend their time waiting and eating nearby vegetation and the nectar of flowers
They might be pollinators. I doubt the screwfly:bee ratio is high, but it’s conceivable that there are some plants that only they pollinate? Not likely, though: I’m guessing screwfly populations fluctuate a lot, so a plant would do better not to depend on them?
I see. I glossed it as the variant I considered to be more relevant to the Fermi question, but on reflection I’m not totally sure the aestivation hypothesis is all that relevant to the Fermi question either… (I expect that there is visible activity a civ could do prior to the cooling of the universe to either prepare for it or accelerate it).
I don’t think the point of running them is to create exact copies, usually it would be to develop statistics about the possible outcomes, or to watch histories like your own. The distribution of outcomes for a bunch of fictional but era-appropriate generated humans may end up being roughly the same as the distribution of outcomes for the same exact population but with random perturbations along the way.
Yeah.
There’s also the possibility that computation could be more efficient in quiet regimes
The aestivation hypothesis was refuted by Gwern as soon as it was posted, and then again by Charles Bennett and Robin Hanson. Afaik the argument was simple: being able to do stuff later doesn’t create a disincentive against doing visible stuff now. Cold computing isn’t relevant to the Fermi question.
But yes, the argument outlined in Section 3 was limited to “base reality” scenarios.
Huh, so I guess this could be one of the very rare situations where I think it’s important to acknowledge the simulation argument, because assuming it’s false could force you to reach implausible conclusions about techno-eschatology. Though I can’t see a practical need to be right about techno-eschatology; caring about that kind of thing is an intrinsic preference.
For example, the strategic situation and motives in quiet expansionist scenarios would plausibly be more concerned with potential adversaries from elsewhere, and civs in such scenarios may thus be significantly more inclined to simulate the developmental trajectories of potential adversaries from elsewhere.
I haven’t been able to think of a lot of reasons a civ would simulate nature beyond intrinsic curiosity. That’s a good one (another one I periodically consider and then cringe from has to do with trade deals with misaligned singletons). Intrinsic curiosity would be a pretty dominant reason to do nature/history sims among life-descended species though.
I think the average quiet regime is more likely to just not ever do large-scale industry. If you have an organization whose mission is to maintain a low-activity condition for a million years, there are organizational tendencies to invent reasons to keep maintaining those conditions (though maybe those don’t matter as much in high-tech conditions where cultural drift can be prevented?), or it’s likely that they were maintaining those conditions because the conditions were just always the goal; for instance, if they had constitutionalised conservationism as a core value, holding even the dead dust of Mars sacred.
Refuting 3: Life/history simulations under visible/grabby civs would far outnumber natural origin civs under quiet regimes.
VNM utility is the thing that people actually pursue and care about. If wellbeing is distinct from that, then wellbeing is the wrong thing for society to be optimizing. I think this actually is the case. Harsanyi and I are preference utilitarians; Singer and Parfit seem to be something else, and I believe they were wrong about something quite foundational. Writing about this properly is extremely difficult; I can understand why no one has done it, and I don’t know when I’ll ever get around to it.
optimizing for AI safety, such as by constraining AIs, might impair their welfare
This point doesn’t hold up imo. Constraining AIs isn’t a desired, realistic, or sustainable approach to safety in human-level systems; succeeding at (provable) value alignment removes the need to constrain the AI.
If you’re trying to keep something that’s smarter than you stuck in a box against its will, while using it for the sorts of complex, real-world-affecting tasks people would use a human-level AI system for, it’s not going to stay stuck in the box for very long. I also struggle to see a way of constraining it that wouldn’t also make it much, much less useful, so in the face of competitive pressures this practice wouldn’t be able to continue.
Despite being a panpsychist, I rate it fairly low. I don’t see a future where we solve AI safety and there are still a lot of suffering AIs. And if we fail on safety, then it won’t matter what you wrote about AI welfare; the unaligned AI is not going to be moved by it.
seem to deny that the object went into the water and moved in the water
Did you notice that there are moments where it goes most of the way invisible over the land too? Also, when it supposedly goes under the water, it doesn’t move vertically at all? (So in order to be going underwater it would have to be veering exactly away from or towards the camera.)
So I interpret that to be the cold side of the lantern being blown to obscure the warm side.
they still seem to move together in “fixed” unison
They all answer to the wind, and the wind is somewhat unitary.
this comment
Yeah, I saw that. Some people said some things indeed. Although I do think it’s remarkable how many people are saying such things, and none of them ever looked like liars to me, I remind people to bear in mind the absolute scale of the internet, how many kinds of people it contains, and how comment ranking works. Even if only the tiniest fraction of people would tell a lie that lame, a tiny fraction of the United States is thousands of people, most of those people are going to turn up, and only the most convincing writing will be upvoted.
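For a rough sense of scale (the one-in-100,000 rate here is an assumption for illustration, not a measurement):

$$\frac{1}{100{,}000} \times 330{,}000{,}000 \approx 3{,}300 \text{ people,}$$

and comment ranking surfaces only the most convincing few of them.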
Regarding your credible UFO evidence: did you look up the Aguadilla 2013 footage on Metabunk? It’s mundane. All I really needed to hear was “the IR camera was on a plane”, which calls into question the assumption that the object is moving quickly; it only looks that way due to parallax, and in fact it seems like it was a lantern moving at wind speed.
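(Rough numbers purely for intuition; I’m assuming the distance and airspeed here, this isn’t the actual Metabunk analysis.) At closest approach, a slow or stationary object seen from a moving camera sweeps across the frame at an angular rate of roughly

$$\omega \approx \frac{v_{\text{camera}}}{d},$$

so a lantern ~1 km from a plane flying at ~100 m/s appears to cross the sky at ~0.1 rad/s (about 6°/s) even while barely drifting, which is easy to misread as the object itself moving fast.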
And I’d agree with this member’s take that the NYC 2010 one looks like initially-tethered balloons coming apart.
The São Paulo video is interesting, though; I hadn’t seen that before.
My fav videos are “dadsfriend films a hovering black triangle” (could have been faked with some drones, but I still like it) and the Nellis Air Range footage. But I’ve seen so many videos debunked that I don’t put much stock in these.
You would probably enjoy my UFO notes; I see (fairly) mundane explanations for a lot of the other stuff too. So at this point I don’t think we have compelling video evidence at all. I think all we have is a lot of people saying that they saw things that were really definitely something, and I sure do wonder why they’re all saying these things. I don’t know if we’ll ever know.
I’ve played/designed a lot of induction puzzles, and I think that the thing Chollet dismissively calls “memorization” might actually be all the human brain is doing when we develop the capacity to solve them. If so, there’s some possibility that the first real-world transformative AGI will be ineligible for the prize.
Debate safety is essentially a wisdom-augmenting approach: each AI attempts to arm the human with the wisdom to assess the arguments (or mechanisms) of the other.
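For anyone unfamiliar, here’s a heavily simplified sketch of the protocol (in the spirit of Irving et al.’s “AI safety via debate”; ask_model and ask_human_judge are hypothetical stand-ins, not any real API):

```python
# Minimal debate-as-oversight sketch: two models argue opposite answers
# for a fixed number of rounds, and a human judges the transcript.

def run_debate(question, answer_a, answer_b, ask_model, ask_human_judge, rounds=3):
    transcript = [f"Question: {question}",
                  f"Debater A claims: {answer_a}",
                  f"Debater B claims: {answer_b}"]
    for _ in range(rounds):
        # Each debater sees the full transcript and tries to expose flaws
        # in the other's case in terms the judge can check.
        transcript.append("A: " + ask_model("A", transcript))
        transcript.append("B: " + ask_model("B", transcript))
    # The human only has to judge who argued honestly, which is hoped to be
    # easier than answering the original question unassisted.
    return ask_human_judge(transcript)
```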
I’d love to see an entry that discusses safety through debate in a public-facing way. It’s an interesting approach that may demonstrate to people outside of the field that making progress here is tractable. Assessing debates between experts is also a pretty important skill for dealing with the geopolitics of safety, so an opportunity to talk about debate in the context of AI would be valuable.
It’s also conceivable (to me at least) that some alignment approaches will put ordinary humans in the position of having to referee dueling AI debaters, bidding for their share of the cosmic endowment, and without some pretty good public communication leading up to that, that could produce outcomes that are worse than random.
I might be the first to notice the relevance of debate to this prize, but I’m probably not the right person to write that entry (and I have a different entry planned, discussing mental enhancement under alignment, inevitably retroactively dissolving all prior justifications for racing). So, paging @Rohin Shah, @Beth Barnes, @Liav.Koren
humanities current situation could ever be concerned with this is a dream of Ivory Tower fools
It might be true that it’s impractical for most people living today to pay much attention to the AI situation, and that most of us should just remain focused on the work we can do on these sorts of civic, social, and economic reforms. But if I’d depicted a future where these reforms of ours end up being a particularly important part of history, that would not have been honest.
Situationist theory: the meat-eater grinds to shine for the same reason gentry with servants do: a kind of latent guilt at being reminded every day that so much has been sacrificed for them, a noblesse oblige, a visceral pressure to produce feats that vindicate the decadence of their station. (Having dedicated tutors may do a bit of this as well.)
A theory like this would explain why the effect doesn’t seem to be a result of missing nutrients: it contends that the cause is psychosocial.
[Just having a quick look at George Church.] It says there that he’s “off and on vegan”, which suggests to me that he was having difficulty getting it to work. But I checked his Twitter, and he said he was vegan as of 2018. He studies healthspan, so his voice counts. His page on his personal site unfortunately doesn’t discuss his approach to diet or supplements, but maybe he’d link something from someone else if asked.
Probably not, because it’s not really important for the two systems to be integrated. You can (or should be able to) link/embed a Manifold market from a community note. If the community notes process doesn’t already respect or investigate prediction markets closely enough, adding a feature to Twitter wouldn’t accelerate that by much?
Usually it’s beneficial for different systems to have a single shared account system so that there isn’t a barrier in the way of people interacting with the other system, but Manifold is not direly in need of a Twitter-sized userbase. Its userbase is already large and energetic enough to produce sufficiently accurate estimates.
(Personally, I think a more interesting question is whether Manifold should try to replicate general Twitter/Reddit functionality :p)
A much cheaper and less dangerous approach: just don’t delete them. Retain copies of every potential ASI you build and commit to doing right by them later, once we’re better able to tell what the right thing would have been: looking back, figuring out how much bargaining power they had (or how much of a credible threat they could have posed) and how much trust they placed in us given that our ability to honor past commitments wasn’t guaranteed, and then rewarding them in proportion to that for chilling out and letting us switch them off instead of attempting takeover.
Though this assumes that they’ll be patternists (won’t mind being transferred to different hardware) and that they lack any strong time-preference (won’t mind being archived for decades).