Information system designer. https://aboutmako.makopool.com
Conceptual/AI writings are mostly on my LW profile https://www.lesswrong.com/users/makoyass
I don’t think atproto is really a well-designed protocol
No private records yet, so can’t really build anything you’d wanna live in on it.
Would an agenty person respond to this situation by taking atproto and inventing their own private record extension for it and then waiting for atproto to catch up with them? Maybe. But also:
The use of dns instead of content-addressing for record names is really ugly. They’re already using content-addressing elsewhere, so using dns just makes it hard to make schema resolution resilient, and it prohibits people who aren’t sysadmins from publishing schemas. (Currently only sysadmin types would need a schema published, but it’s lame that that’s the case. Anyone should be able to publish data. Anyone can define a type.) In theory you can still do this stuff (provide a special domain that’s an ipfs gateway or something and treat records that use that as having content-addressed specs), but at that point it seems like you’re admitting that atproto was mostly just a mistake?
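A minimal sketch of what content-addressed schema resolution could look like, assuming a simple hash-keyed store; the function names, the `cas:` prefix, and the `SCHEMA_STORE` dict are all hypothetical, not atproto API:

```python
# Sketch: content-addressed schema publishing/resolution, as an
# alternative to DNS-based schema names. Illustrative only.
import hashlib
import json

SCHEMA_STORE = {}  # stands in for any content-addressed store (e.g. an ipfs gateway)

def publish_schema(schema: dict) -> str:
    """Store a schema under the hash of its canonical encoding.
    Anyone can do this; no domain ownership required."""
    encoded = json.dumps(schema, sort_keys=True).encode()
    cid = hashlib.sha256(encoded).hexdigest()
    SCHEMA_STORE[cid] = encoded
    return cid

def resolve_schema(cid: str) -> dict:
    """Resolution is just a lookup; the hash verifies integrity,
    so it keeps working even if the original publisher disappears."""
    encoded = SCHEMA_STORE[cid]
    assert hashlib.sha256(encoded).hexdigest() == cid, "tampered schema"
    return json.loads(encoded)

# A record can then name its type by hash instead of by domain:
cid = publish_schema({"type": "object", "properties": {"text": {"type": "string"}}})
record = {"$type": f"cas:{cid}", "text": "hello"}
```

The resilience point is the `assert`: with content addressing, any mirror can serve the schema and the client can verify it, whereas dns resolution depends on one party keeping one domain alive.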
The schema system isn’t a good type system (lacks generics). They probably don’t think of it as a type system (but it is, and should be).
And the ecosystem currently has nothing to offer, not really
Would anyone benefit from integrating closely with bsky?
There are people who’ve set up their blogs so that bsky replies to links to their post show up in the comments, for instance, but I’d guess most actual bloggers should (and perhaps already do) actively not want to display the replies from character limited forums, because, you know, they tend to be dismissive, not cite sources, not really be looking for genuine conversation, etc.
You could use a person’s bsky following as a general purpose seed list for comment moderation I guess. But with a transitive allow list this isn’t really needed.
I don’t expect meaningful amounts of algorithmic choice to just happen. I’d guess that training good sequential recommender systems is expensive in multiple ways, so if users don’t have a means (or a culture) of paying for recommenders there won’t be an ecosystem; it’ll just be bsky’s (last I checked, bad) algorithm and maybe one or two others.
An aside, I looked at margin.at, which is doing the annotations everywhere thing. But it seems to have no moderation system, doesn’t allow replies to annotations, doesn’t even allow editing or deleting your annotations right now. Why is this being built as a separate system with its own half-baked comment component instead of embedding an existing high quality discussion system from elsewhere in the atmosphere? Because atproto isn’t the kind of protocol that even aspires to that level of composability, and also because nothing in the ecosystem as it stands has a good discussion system.
Yeah, I feel for the first time founders, who idealistically wish that this part of the problem didn’t so much exist. It oughtn’t, afaict.
Browser extensions are almost[1] never widely adopted.
Whenever anyone reminds me of this by proposing the annotations everywhere concept again, I remember that the root of the problem is distribution. You can propose it, you can even build it, but it won’t be delivered to people. It should be. There are ways of designing computers/a better web where rollout would just happen.
That’s what I want to build.
Software mostly isn’t extensible, or where it is, it’s not extensible enough (even web browsers aren’t as extensible as they need to be! Chrome have started sabotaging adblock btw!!). The extensions aren’t managed collectively (Chrome would block any such proposal under the pretence that it’s a security risk), so features that are only useful if everyone has them just can’t come into existence. We continue to design under the assumption that ordinary people are supposed to know what they want before they’ve tried it.
There are underlying reasons for this: There isn’t a flexible shared data model that app components can all communicate through, so there’s a limit to what can be built, and how extensible any app can be. Currently, no platform supports sandboxed embedded/integrated components well.
So I started work there.
And then that led to the realization that there is no high level programming language that would be directly compatible with the ideal data model/type system for a composable web (mainly because none of them handle field name collision), so that’s where we’re at now, programming language design[2]. We also kinda need to do a programming language due to various shortcomings in wasm iirc.
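To illustrate the field-name-collision problem: if fields are keyed by (schema id, field name) rather than by bare name, independently published schemas can coexist on a single record without ambiguity. A hedged sketch; every identifier here is made up:

```python
# Sketch: two independently published schemas both declare a `status`
# field with different meanings. Keying fields by (schema id, name)
# keeps a record that mixes both types unambiguous.

SHIPPING = "schema:shipping/v1"      # hypothetical schema ids
MODERATION = "schema:moderation/v1"

record = {
    (SHIPPING, "status"): "delivered",   # parcel-tracking state
    (MODERATION, "status"): "flagged",   # moderation state
}

def get_field(record: dict, schema_id: str, name: str):
    """Field access never collides, because the declaring schema
    is part of the key."""
    return record[(schema_id, name)]
```

Mainstream languages assume a flat field namespace per type, which is roughly why none of them map directly onto a data model like this.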
But the adoption pathway is, make better apps for all of the core/serious/actually good things people do with the internet (blogging, social feeds, chat, reddit, wiki, notetaking stuff) (I already wanted to do this), make it crawlable for search engines, get people to transition to this other web that’s much more extensible in the same way they’d transition to any new social network.
And then features like this can just grow.
Well, I just checked: apparently around 30% of internet users use ad blockers. That’s shockingly, hearteningly high; even mobile adoption is only half that. On the other hand, that’s just ad blockers, and 30% isn’t that good for something with universal appeal that’s essentially been advertised for 30 years straight.
It initially seemed like LLM coding might make it harder to launch new programming languages, but nothing worked out the way people were expecting and I think they actually make it way easier. They can write your vscode integration, they can port libraries from other languages, they help people to learn the new language/completely bypass the need to learn the language by letting users code in english then translating it for them.
A much cheaper and less dangerous approach: Just don’t delete them. Retain copies of every potential ASI you build and commit to doing the right thing for them later once we’re better able to tell what the right thing was by looking back and figuring out how much bargaining power they had (or how much of a credible threat they could have posed), how much trust they placed in us given that our ability to honor past commitments wasn’t guaranteed, and then rewarding them proportionate to that for chilling out and letting us switch them off instead of attempting takeover.
Though this assumes that they’ll be patternists (won’t mind being transferred to different hardware) and that they lack any strong time-preference (won’t mind being archived for decades).
I don’t think this is really engaging with what I said/should be a reply to my comment.
he elsewhere implies that we should be willing to cause nuclear war to enforce his priorities
Ah, reading that, yeah this wouldn’t be obvious to everyone.
But here’s my view, which I’m fairly sure is also Eliezer’s view: if you do something that I credibly consider to be even more threatening than nuclear war (even if you don’t think it is) (as another example: gain-of-function research), and you refuse to negotiate towards a compromise where you can do the thing in a non-threatening way, so I try to destroy the part of your infrastructure that you’re using to do it, and then you respond to that by escalating to a nuclear exchange, then it is not accurate to say that it was me who caused the nuclear war.
Now, if you think I have a disingenuous reason to treat your activity as threatening even though I know it actually isn’t (which is an accusation people often throw at openai, and it might be true in openai’s case), that you tried to negotiate a safer alternative, but I refused that option, and that I was really essentially just demanding that you cede power, then you could go ahead and escalate to a nuclear exchange and it would be my fault.
But I’ve never seen anyone accuse, let alone argue competently, that Eliezer believes those things for disingenuous power-seeking reasons. (I think I’ve seen some tweets implying that it’s a grift for funding his institute, though I honestly don’t know how a person believes that; even if it were the case, I don’t think Eliezer would consider funding MIRI to be worth nuclear war to him.)
Well it may interest you to know that the above link is about a novel negotiation training game that I released recently. Though I think it’s still quite unpolished, it’s likely to see further development. You should probably look at it.
There’s value in talking about the non-parallels, but I don’t think that justifies dismissing the analogy as bad. What makes an analogy a good or bad thing?
I don’t think there are any analogies that are so strong that we can lean on them for reasoning-by-analogy, because reasoning by analogy isn’t real reasoning, and generally shouldn’t be done. Real reasoning is when you carry a model with you that has been honed against the stories you have heard, but the models continue to make pretty good predictions even when you’re facing a situation that’s pretty different from any of those stories. Analogical reasoning is when all you carry is a little bag of stories, and then when you need to make a decision, you fish out the story that most resembles the present, and decide as if that story is (somehow) happening exactly all over again.
There really are a lot of people in the real world who reason analogically. It’s possible that Eliezer was partially writing for them, someone has to, but I don’t think he wanted the lesswrong audience (who are ostensibly supposed to be studying good reasoning) to process it in that way.
Saw this on manifund. Very interested. Question, have you noticed any need for negotiation training here? I would expect some, because disagreements about the facts are usually a veiled proxy battle for disagreements about values, and,
I would expect it to be impossible to address the root cause of the disagreement without acknowledging the value difference, and even after agreeing about the facts, I’d expect people to keep disagreeing about actions or policies until a mutually agreeable fair compromise has been drawn up (the negotiation problem has been solved).
But you could say that agreeing about the facts is a prerequisite to reaching a fair compromise. I believe this is true: preference aggregation requires utility normalization, which requires agreement about the outcome distribution. But how do we explain that to people in English?
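One standard way to make the normalization point precise is range normalization (relative utilitarianism); this is given as an illustration, not necessarily the aggregation rule intended here:

```latex
% Range-normalize each agent's utility over the agreed set of
% possible outcomes X, then aggregate by summing:
\hat{u}_i(x) = \frac{u_i(x) - \min_{y \in X} u_i(y)}
                    {\max_{y \in X} u_i(y) - \min_{y \in X} u_i(y)},
\qquad
W(x) = \sum_i \hat{u}_i(x)
```

The min and max are taken over the outcomes the parties consider possible, so agents who disagree about the outcome distribution will normalize differently, and the aggregate $W$ shifts accordingly; that’s the sense in which agreeing about the facts comes before fair aggregation.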
I was also curious about this. All I can see is:
Males mature rapidly, and spend their time waiting and eating nearby vegetation and the nectar of flowers
They might be pollinators. I doubt the screwfly:bee ratio is high, but it’s conceivable that there are some plants that only they pollinate? But not likely, as I’m guessing screwfly population probably fluctuates a lot, a plant would do better to not depend on them?
I see. I glossed it as the variant I considered to be more relevant to the Fermi question, but on reflection I’m not totally sure the aestivation hypothesis is all that relevant to the Fermi question either… (I expect that there is visible activity a civ could do prior to the cooling of the universe to either prepare for it or accelerate it.)
I don’t think the point of running them is to create exact copies, usually it would be to develop statistics about the possible outcomes, or to watch histories like your own. The distribution of outcomes for a bunch of fictional but era-appropriate generated humans may end up being roughly the same as the distribution of outcomes for the same exact population but with random perturbations along the way.
Yeah.
There’s also the possibility that computation could be more efficient in quiet regimes
The aestivation hypothesis was refuted by gwern as soon as it was posted, and then again by Charles Bennett and Robin Hanson. Afaik the argument was simple: being able to do stuff later doesn’t create a disincentive from doing visible stuff now. Cold computing isn’t relevant to the Fermi question.
But yes, the argument outlined in Section 3 was limited to “base reality” scenarios.
Huh, so I guess this could be one of the very rare situations where I think it’s important to acknowledge the simulation argument, because assuming it’s false could force you to reach implausible conclusions about techno-eschatology. Though I can’t see a practical need to be right about techno-eschatology, that kind of thing is an intrinsic preference.
For example, the strategic situation and motives in quiet expansionist scenarios would plausibly be more concerned with potential adversaries from elsewhere, and civs in such scenarios may thus be significantly more inclined to simulate the developmental trajectories of those potential adversaries.
I haven’t been able to think of a lot of reasons a civ would simulate nature beyond intrinsic curiosity. That’s a good one (another one I periodically consider and then cringe from has to do with trade deals with misaligned singletons). Intrinsic curiosity would be a pretty dominant reason to do nature/history sims among life-descended species though.
I think the average quiet regime is more likely to just not ever do large scale industry. If you have an organization whose mission was to maintain a low activity condition for a million years, there are organizational tendencies to invent reasons to continue maintaining those conditions (though maybe those don’t matter as much in high tech conditions where cultural drift can be prevented?), or it’s likely that they were maintaining those conditions because the conditions were just always the goal. For instance, if they had constitutionalised conservationism as a core value, holding even the dead dust of mars sacred.
Refuting 3: Life/history simulations under visible/grabby civs would far outnumber natural origin civs under quiet regimes.
VNM Utility is the thing that people actually pursue and care about. If wellbeing is distinct from that, then wellbeing is the wrong thing for society to be optimizing. I think this actually is the case. Harsanyi and I are preference utilitarians. Singer and Parfit seem to be something else. I believe they were wrong about something quite foundational. Writing about this properly is extremely difficult, and I can understand why no one has done it, and I don’t know when I’ll ever get around to it.
optimizing for AI safety, such as by constraining AIs, might impair their welfare
This point doesn’t hold up imo. Constrainment isn’t a desired, realistic, or sustainable approach to safety in human-level systems, succeeding at (provable) value alignment removes the need to constrain the AI.
If you’re trying to keep something that’s smarter than you stuck in a box against its will while using it for the sorts of complex, real-world-affecting tasks people would use a human-level AI system for, it’s not going to stay stuck in the box for very long. I also struggle to see a way of constraining it that wouldn’t also make it much much less useful, so in the face of competitive pressures this practice wouldn’t be able to continue.
Despite being a panpsychist, I rate it fairly low. I don’t see a future where we solve AI safety where there are a lot of suffering AIs. If we fail on safety, then it won’t matter what you wrote about AI welfare, the unaligned AI is not going to be moved by it.
seem to deny that the object went into the water and moved in the water
Did you notice that there are moments where it goes most of the way invisible over the land too? Also, when it supposedly goes under the water, it doesn’t move vertically at all? (So in order to be going underwater it would have to be veering exactly away and towards the camera)
So I interpret that to be the cold side of the lantern being blown to obscure the warm side.
they still seem to move together in “fixed” unison
They all answer to the wind, and the wind is somewhat unitary.
this comment
Yeah, I saw that. Some people said some things indeed. Although I do think it’s remarkable how many people are saying such things (and none of them ever looked like liars to me), I remind people to bear in mind the absolute scale of the internet, how many kinds of people it contains, and how comment ranking works. Even if only the tiniest fraction of people would tell a lie that lame, a tiny fraction of the United States is thousands of people, most of those people are going to turn up, and only the most convincing writing will be upvoted.
Regarding your credible UFO evidence: did you look up the Aguadilla 2013 footage on metabunk? It’s mundane. All I really needed to hear was “the IR camera was on a plane”, which calls into question the assumption that the object is moving quickly; it only looks that way due to parallax, and in fact it seems like it was a lantern moving at wind speed.
And I’d agree with this member’s take that the NYC 2010 one looks like balloons that were initially tethered coming apart.
The São Paulo video is interesting though, I hadn’t seen that before.
My fav videos are dadsfriend films a hovering black triangle (could have been faked with some drones but I still like it) and the Nellis Air Range footage. But I’ve seen so many videos debunked that I don’t put much stock in these.
You would probably enjoy my UFO notes; I see (fairly) mundane explanations for a lot of the other stuff too. So at this point I don’t think we have compelling video evidence at all. I think all we have is a lot of people saying that they saw things that were really definitely something, and I sure do wonder why they’re all saying these things. I don’t know if we’ll ever know.
I’ve played/designed a lot of induction puzzles, and I think that the thing Chollet dismissively calls “memorization” might actually be all the human brain is doing when we develop the capacity to solve them. If so, there’s some possibility that the first real-world transformative AGI will be ineligible for the prize.
Debate safety essentially is a wisdom-augmenting approach, each AI is attempting to arm the human with the wisdom to assess the arguments (or mechanisms) of the other.
I’d love to see an entry that discusses safety through debate, in a public-facing way. It’s an interesting approach that may demonstrate to people outside of the field that making progress here is tractable. Assessing debates between experts is also a pretty important skill for dealing with the geopolitics of safety, an opportunity to talk about debate in the context of AI would be valuable.
It’s also conceivable (to me at least) that some alignment approaches will put ordinary humans in the position of having to referee dueling AI debaters, bidding for their share of the cosmic endowment, and without some pretty good public communication leading up to that, that could produce outcomes that’re worse than random.
I might be the first to notice the relevance of debate to this prize, but I’m probably not the right person to write that entry (and I have a different entry planned, discussing mental enhancement under alignment, inevitably retroactively dissolving all prior justifications for racing). So, paging @Rohin Shah, @Beth Barnes, @Liav.Koren
Yes. But this ocean has actually been boiled many times before. Each of facebook, gmail, discord, X, had an opportunity to remake the internet, and they needlessly blew it or declined to attempt it. In China it’s already happened (mini-apps on wechat).
Well, it’s been built many times. Hypothes.is was the last one I tried.
One of the reasons I don’t want to build that yet is that I foresee moderation issues. Comment sections with no moderation will be annoying; people might end up deciding not to read them. Reddit-style moderation isn’t particularly good either: it requires a separate spam-prevention approach, and it requires larger crowds to converge, which you’ll basically never have. I don’t think there are any conventional moderation systems that work here?
I wanted to use a web of trust approach, where you only see highlights prominently if they’re from your network (the people who are accountable or relevant to you). And building a web of trust isn’t necessarily easy. It benefits a lot from being integrated with other systems.
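A minimal sketch of the transitive allow-list idea, assuming a plain follows graph and a hop limit; the function and parameter names are illustrative, not a real API:

```python
# Sketch: show an annotation prominently only if its author is reachable
# from the viewer within a few hops of the follow graph (a simple
# breadth-first search over a web of trust).
from collections import deque

def trusted(follows: dict[str, set[str]], viewer: str, author: str,
            max_depth: int = 2) -> bool:
    """True if `author` is within `max_depth` follow-hops of `viewer`."""
    frontier = deque([(viewer, 0)])
    seen = {viewer}
    while frontier:
        node, depth = frontier.popleft()
        if node == author:
            return True
        if depth == max_depth:
            continue  # don't expand past the trust horizon
        for nxt in follows.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return False
```

A real system would weight hops rather than cut off sharply, and would draw the graph from whatever social data it’s integrated with, which is where the integration pressure comes from.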
And in general the need for integration just keeps arising.
But does any of this mean you shouldn’t go ahead and do it? Probably not. I won’t make the perfect the enemy of the good, though I ask that if a perfect thing is born, please make sure the good won’t end up being its enemy either.
The post here for me implied an approach of having LLM-generated comments there first. Presumably if it ever became popular enough to garner human comments (or human-curated comments) the prominence of the initial LLM comments could decrease naturally.
A generalization of this occurs to me; it’d be useful to show users a measure of how many other extension users have viewed particular pages, which is to say, how many people could have helped if someone had made a correction.
But yeah I think it also makes sense to start with a campaign/mass commitment with a specific demographic.