Longtermist writer, principled interactive system designer. https://aboutmako.makopool.com
Consider browsing my Lesswrong profile for interesting frontier (fringe) stuff https://www.lesswrong.com/users/makoyass
Due to its focus on statistical reasoning and the difficulty of acting on the Fermi paradox in an effective altruist context (despite how interesting and probably important it is), I’ve linkposted this to lesswrong.com
I’ve been musing about a Suspension for Historically Significant Minds movement. I don’t particularly care whether I personally get suspended; I don’t think I’m important, we can only save so many of these living biographies, and others are more important. I think it’s a tragedy that the most interesting biographies are currently being burned.
I’m not sure it’s reasonable to expect a fund like this to be able to act very often, though! The figures who won’t pay for their own suspension usually aren’t going to be willing to accept suspension.
The people I’d want to nominate would tend to have a deep attachment to some community of the present; they would rarely think of the far future. Most of them, on receiving their invitation, would think about it for 20 minutes and then trash it, out of a sense of humility, and out of a sense that accepting such a thing would look from the outside like an abandonment of their community. I would want to say to them, “No, you were selected because you are the largest portion of that community that we’re able to save.” I’m not sure whether they’d hear it.
Maybe it would help to give them additional nominations to allocate to others, so it wouldn’t just be them. A lot of them wouldn’t want to deal with the political consequences of having to make a decision like that. It would just make things messier. The dirty work of triage.
Regarding ARCHES
Contrary to some others he argues that we should perhaps never make ‘prepotent’ AI (one that cannot be controlled by humans) - not even a defensive one to prevent other AI threats.
Where’s that? I’d be very interested to see an argument for that. I looked around and found a lot of reasons prepotence is dangerous, and ways to avoid it, but wasn’t able to find an argument that it is decisively more dangerous than its absence.
(I do suspect non-prepotence is dangerous. In short: prepotent AGI can, and visibly is required to, exceed us morally (not in the sense of making metaphysical moral progress, which I don’t believe in, but in the sense that there can be higher levels of patience, lucidity, and knowledge of the human CEV and its applications, which would bring the thing to conclusions we’d find shocking). There’s a sense in which prepotent AGI would be harder to weaponize, harder to train on a single profane object-level objective and fire; it is less tempting to use it to do stupid rash things we will grow to regret, because the consequences of using it in stupid rash regrettable ways are so much more immediately, obviously irrevocable. In the longest term, building agentic infrastructures that maintain binding treaties will be a necessity for overcoming the last coordination problems; that’s another reason prepotence is inevitable. Notably, the treaty “It should be globally impossible to make prepotent AGI” would itself manifest as a prepotent agency. The idea that prepotence is or should be avoidable might be conceptually unworkable.)
(In my skim-read, I also couldn’t find discussion of the feasibility of aligned prepotent AI, and that’s making me start to wonder if there might be a perilous squeamishness around. There are men who worry about being turned into housecats, work vainly on improving the human brain with neural meshes, so that humans can compete with AI. The reality is the housecats will be happy and good and as enfranchised as they wish to be, and human brains will not compete, and that will be fine. It’s imaginable that this aversion to prepotence comes from a regressive bravado that has been terminally outmoded since the rise of the first city-state. If I’m missing the mark on this though, apologies for premature psychologizing.)
Infinite Ethics is solved by LDT btw. The multiverse is probably infinite (I don’t know where this intuition comes from but come it does), but if so, there are infinite instances of you strewn through it, and you are effectively controlling all of them acausally. Some non-zero measure of all of that is entangled with your decisions.
A nonstandard solution I still can’t stop thinking about: give up on the impossible project of digital privacy and democratize the panopticon
I don’t think you should be so defensive in the face of accusations of promoting a bragging culture. Own it. If someone asked me “Isn’t it unethical to brag” I would tell them that, no, contrary, it’s positively ethical to brag.
The following is opinion, probably contains inaccuracies, but would be important if true.
Bragging (well) about how good you are is a good norm.
If credibly signalling our goodness is normalized, social pressures will emerge to do more good than we otherwise would have. If you normalize the right sort of bragging, it will create a culture of philanthropic accountability. I sometimes wonder if the taboo against bragging might just be an artifact of Abrahamic religion (if God is the final judge of the virtue of every man, there’s little need for us to judge each other, so showing high concern for the judgements of your fellow man is a sign of a lack of piety) plus crab-bucket mentality (I feel pissed off when the best man shows everyone how much better he is than me; I am a narcissist and cannot believe my being pissed off by that could reflect a character flaw on my part; it must be because he’s doing something genuinely bad; therefore we should agree that it’s unethical and forbid it). I can’t see why we should need it any more. If you rein costly goodness signalling firmly under the earnest truthseeking norms of effective altruism, it could be the strongest thing we ever built. If you don’t think you can rein in these wild horses of Ra, then I would recommend that you don’t summon them.
So, I like the concept, perhaps for different reasons than your own, but I hope you’ll find my reasons convincing/refutable.
I was a little concerned about the bid sniping recommendation; bad things often happen when a technique for subverting a system and getting an edge over others is widely adopted. But it occurred to me that all that would happen is that eBay auctions would become, like, one-shot simultaneous blind bids, which might well be an improvement. Auction processes, currently, are selected to benefit sellers, to the detriment of buyers, and to the detriment of pricing efficiency (I’d expect the winner’s curse to lead to overpricing), so it wouldn’t be that surprising if the adoption of bid sniping turns out to be a generally socially beneficial transition.
I can second the recommendation of instant pots. I have a crockpot express (I couldn’t get an instant pot in new zealand at a decent price. This baffles me, why does no electronics store seem to pay attention to online reviews? How do they make their import decisions?) and I use it all the time for cooking beans, rice, stew, and occasionally for raising dough (it has a low heat yogurt setting).
Regarding cast iron pans, do you know how non-stick seasoning treatment works, like on the physics level? I really need to know! The seasoning on my wok (assuming it’s essentially the same chemistry) keeps failing, it’s completely mystifying to me and I’m tired of it. Patches of it will just seemingly at random become sticky, tacky-feeling to the spatula, stuff will burn onto it. Usually right after adding rice (but, tragically, not always) the burnscum will lift off and it will be perfectly non-stick again. WHY.
I think we should emphasize that for vegans, B12 isn’t just probably good, it’s mandatory. There aren’t really any plant-based sources, and if you have too little for too long you will undergo severe neurological impairment. Also vegans must remember to take creatine for maximum memory function! :&lt;
I’m glad to see the inclusion of anthropic units as a function of neuron count/brain mass. Turns out that makes a huge difference. Ideally I’d use brain mass*square(neuron count), but that would be overkill...
In building this, did you come across literature about this question of how anthropic measure relates to mass and neuron configuration? I’d love to see any if you have that. I’ve got quite an interest in the anthropic measure binding question, my somewhat unconventional stance influences my decisions regarding animal welfare, so I really ought to read whatever’s out there.
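To make the “brain mass × neurons²” idea above concrete, here is a toy comparison of the two weighting rules. All the animal figures are illustrative placeholders rather than sourced estimates, and the `overkill_weight` formula is just the speculative rule from my comment, not anything from the literature:

```python
# Toy comparison of two anthropic-weighting rules.
# linear_weight: weight moral patients by neuron count alone.
# overkill_weight: the speculative brain-mass * neurons^2 rule.
animals = {
    "chicken": {"brain_mass_g": 4.0,    "neurons": 2.2e8},
    "pig":     {"brain_mass_g": 180.0,  "neurons": 2.2e9},
    "human":   {"brain_mass_g": 1350.0, "neurons": 8.6e10},
}

def linear_weight(a):
    # weight by neuron count alone
    return a["neurons"]

def overkill_weight(a):
    # brain mass times neuron count squared
    return a["brain_mass_g"] * a["neurons"] ** 2

# Express each animal's weight relative to a human under both rules.
human = animals["human"]
for name, a in animals.items():
    print(name,
          linear_weight(a) / linear_weight(human),
          overkill_weight(a) / overkill_weight(human))
```

The point of the sketch is just that the choice of rule changes the relative weights by many orders of magnitude, which is why the binding question matters for animal welfare math.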
I’m not sure the maceration of male chicks induces any suffering. IIRC, it’s approved as a humane killing method by the SPCA or someone like that.
I was there and I can report that T is awesome in that particular way consistently.
I am really puzzled by those graphs, mm. But as to the Easterlin paradox, it’s still alive: http://repec.iza.org/dp7234.pdf Happiness has been increasing, and so has GDP, but the rates of increase still don’t seem to have much of a relationship.
Crazyism about a topic is the view that something crazy must be among the core truths about that topic. Crazyism can be justified when we have good reason to believe that one among several crazy views must be true but where the balance of evidence supports none of the candidates strongly over the others
Maybe a moratorium concerning soy and beef from the Amazon region would be enough to settle this issue; even so, given that the first driver of deforestation is speculation with land prices (besides illegal timber and mining), I’m afraid such a ban wouldn’t be enough to stop it.
The question then is, where is the value of the land coming from, and how much of it is coming from each possible use: loggers, soy farmers, or meat farmers? If you stop those uses, won’t speculation stop?
We actually do have a good probability for a large asteroid striking the earth within the next 100 years, btw. It was the product of a major investigation, I believe it was 1⁄150,000,000.
Probabilities don’t have to be a product of a legible, objective or formal process. It can be useful to state our subjective beliefs as probabilities to use them as inputs to a process like that, but also generally it’s just good mental habit to try to maintain a sense of your level of confidence about uncertain events.
There’s no answer for this
Sure there is. Just implement the decision theory whose nature is that which would have been the optimal nature for it to have always had.
That is, implement Logical Decision Theory.
I’m only being a little bit facetious. Logical Decision Theory often seems to me more like a mostly formal statement about the (arguably) perfect policy about coordination and pre-commitment and superrationality, rather than a method for actually unearthing it.
But pondering this statement does seem to have progressed my thinking a lot and I would generally recommend it to others.
The community and events link is broken.
For the sake of coordination, I declare an intent to enter.
(It’s beneficial to declare intent to enter, because if we see that the competition is too fierce to compete with, we can save ourselves some work and not make an entry, while if we see that the competition is too cute to compete with, we can negotiate and collaborate.)
I’ll be working under pretty much Eliezer’s model, where general agency emerges abruptly and is very difficult to align, inspect, or contain. I’m also very sympathetic to Eliezer’s geopolitical pessimism, but I have a few tricks for fording it.
For the sake of balance in the comments section, I should mention that, contrary to many of the voices here, I don’t really see anything wrong with the requirements. For instance, the riddle of AGI existing for as long as five years without a singularity abruptly hitting: though I agree it’s a source of... tension, it was kind of trivial for me to push through, and the solution was clean.
I developed illustration skills recently (acquired a tablet after practicing composition and anatomy in the background for most of my life and, wow, I can paint pretty good), can narrate pretty well, and I have plenty of musician friends, so although I have no idea what our audiovisual accompaniment is going to be, I’ll be able to produce something. (maybe I’ll just radio-play the ground-level stories)
And can I just say “director of communications” sounds about right, because you’re really directing the hell out of this ;) you’re very specific about what you want. And the specification forms the shape of a highly coherent vision. Sorry I just haven’t really encountered that before, it’s interesting.
(why is the location marker wrong. Did it demand a street address?)
Metaculus currently gives ~20% probability to >60 months
I’d expect the bets there to be basically random. Prediction markets aren’t useful for predictions about far-out events: betting on them requires tying up your credit for that long, which is a big opportunity cost, so you should expect that only fools are betting here. I’d also expect it to be biased towards the fools who don’t expect AGI to be transformative, because the fools who do expect AGI to be transformative have even less incentive to bet: there’s not going to be any use for Metaculus points after a singularity. They become meaningless; past performance stops working as a predictor of future performance; the world will change too much, and so will the predictors.
If a singularity-expecter wants tachyons, they’re really going to want to get them before this closes. If they don’t sincerely want tachyons, if they’re driven by something else, then their answers wouldn’t be improved by the incentives of a prediction market.
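The opportunity-cost point can be made concrete with a toy calculation. For simplicity this treats the market as a real-money one with binary yes-contracts; all the numbers (your credence, the market price, the horizon, the outside rate of return) are assumptions for illustration:

```python
# Toy opportunity-cost check for long-dated prediction-market bets.
# A yes-contract priced at `market_price` pays 1/market_price per dollar
# staked if it resolves yes, and zero otherwise.

def expected_multiple(true_prob, market_price):
    # Expected multiple on your stake, given your own credence.
    return true_prob * (1.0 / market_price)

def outside_multiple(years, annual_rate):
    # What the same dollar would grow to if left compounding elsewhere.
    return (1.0 + annual_rate) ** years

# Even a large edge (you believe 30%, the market prices 20%) loses to
# ordinary 5%/year compounding once the horizon stretches to a decade:
bet = expected_multiple(0.30, 0.20)   # 1.5x expected
alt = outside_multiple(10, 0.05)      # ~1.63x
print(bet < alt)  # True: tying up the stake that long isn't worth it
```

So rational bettors with long-horizon views mostly stay out, and the prices end up reflecting whoever is left.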
Is the argument here something along the lines of; I find that I don’t want to struggle to do what these values would demand, so they must not be my values?
I hope I’m not seeing an aversion to surprising conclusions in moral reasoning. Science surprises us often, but it keeps getting closer to the truth. Technology surprises us all of the time, but it keeps getting more effective. If you won’t accept any sort of surprise in the domain of applied morality, your praxis is not going to end up being very good.