Co-Director of Equilibria Network: https://eq-network.org/
I try to write as if I were having a conversation with you in person.
I would like to claim that my current safety beliefs are a mix of Paul Christiano’s, Andrew Critch’s, and Def/Acc’s.
Jonas Hallgren 🔸
First and foremost, I’m low confidence here.
I will focus on x-risk from AI and I will challenge the premise of this being the right way to ask the question.
What is the difference between x-risk and s-risk/increasing the value of futures? When we mention x-risk with regard to AI we think of humans going extinct, but I believe that to be shorthand (at least in the EA sphere) for concerns about wise, compassionate decision making.
Personally, I think that x-risk and good decision making in terms of moral value might be coupled to each other. We can think of our current governance conditions a bit like correction systems for individual errors: if the errors pile up, we go off the rails and increase x-risk as well as the chances of a bad future.
So a good decision-making system should account for both x-risk and value estimation; the solution is therefore the same, and it is a false dichotomy?
(I might be wrong and I appreciate the slider question anyway!)
First and foremost, I agree with the point. I think looking at this especially from a lens of transformative AI might be interesting. (Coincidentally, this is something I’m currently doing using ABMs with LLMs.)
You probably know this one but here’s a link to a cool project: https://effectiveinstitutionsproject.org/
Dropping some links below. I’ve been working on this with a couple of people in Sweden for the last 2 years; we’re building an open-source platform for better democratic decision making using prediction markets: https://digitaldemocracy.world/flowback-the-future-of-democracy/
The people I’m working with there are also working on:
I know the general space here so if anyone is curious I’m happy to link to people doing different things!
You might also want to check out:
I guess a random thought I have here is that you would probably want video, and you would probably want it to be pretty spammable so you have many shots at it. Looking at Twitter, we already see large numbers of bots commenting on things, which is essentially a text deepfake.
Like, I can see that in a year or so, when Sora is good enough that creating short-form stable video is easy, we will see a lot more manipulation of voters through deepfakes on various social media.
(I don’t think the tech is easy enough to use yet for it to be painless, even though it is possible. I spent a couple of hours trying to set this up for a showcase once, and you had to do some fine-tuning and training; there was no plug-and-play option, which is probably a bottleneck for now.)
FWIW, I find that if you analyze places where we’ve successfully aligned things in the past (social systems, biology, etc.), the 1st and 2nd types of alignment really don’t break down in that way.
After doing Agent Foundations for a while, I’m just really against the alignment frame, and I’m personally hoping that more research in that direction will happen so that we get more evidence that other types of solutions are needed (e.g. the alignment of complex systems, as has happened in biology and social systems in the past).
FWIW, I completely agree with what you’re saying here. I think that if you seriously go into consciousness research, especially what we Westerners would label a sense of self rather than anything else, it quickly becomes infeasible to hold the position that the direction we’re taking AI development, e.g. towards AI agents, will not lead to AIs having self-models.
For all intents and purposes, this holds for most physicalist or non-dual theories of consciousness, which are the only feasible ones unless you want to bite some really sour apples.
There’s a classic “what are we getting wrong” question in EA, and I think it’s extremely likely that we will look back in 10 years and say, “wow, what were we doing here?”.
I think it’s a lot better to think of systemic alignment: look at the properties that we want for the general collective intelligences we’re engaged in, such as our information networks or our institutional decision-making procedures, and think about how we can optimise these for resilience and truth-seeking. If certain AIs deserve moral patienthood, then that truth will naturally arise from such structures.
(hot take) Individual AI alignment might honestly be counter-productive towards this view.
I’m not a career counsellor, so take everything with a grain of salt, but you did publicly post this asking for unsolicited advice, so here you go!
So, more directly: if you’re thinking of EA as a community that needs specific skills and you’re wondering what to do, your people management, strategy & general leadership skills are likely to be in high demand from other organisations: https://forum.effectivealtruism.org/posts/LoGBdHoovs4GxeBbF/meta-coordination-forum-2024-talent-need-survey
Someone else mentioned that enjoyment can be highly organisation-specific and even specific to the stage of the organisation. My thought is something like:
Take a year off and commit yourself to only doing exploration during that year: try out working in different organisations at different scales, maybe more early stage, maybe later stage; I’m sure you’ve got some knowledge of what is best here.
Here’s a fun book that mentions optimal exploration-exploitation; I think of this a lot when it comes to my own life, and it might be useful:
I thought this book was pretty good for a very specific strategy of quick career role exploration and how you can go about doing that:
Think about which roles let you leverage your strategic people management & leadership skills whilst still enjoying the work. If you really want to do more coding, then a CTO or similar role somewhere probably makes a lot of sense.
Maybe you could work at a deep tech company?
Maybe early-stage startups are something you enjoy more; maybe you’re more of a zero-to-one type of person?
Figure out what exactly it is that you don’t enjoy; you might be surprised, you might not be.
Test, test, test. If you’ve found yourself able to do this in the past, you have a lot of clout to do it again; it is a lot easier for an executive to get investment again, to know which people to hire, etc.
I know a bunch of people who have felt similar things to what you’re feeling at this moment, specifically people in executive managerial roles. The pattern I see from everyone is that they take a break (shocker!), and then it really varies how fast they get back into it again.
Maybe there are specific mental health things you can improve that make you 20% more effective at listening and can really help at the next thing?
I like to think of it as them decompressing and learning the lessons from the previous, very focused period before getting back at it again.
Those are some random thoughts, best of luck to you!
So I’ll just report on a vibe I’ve been feeling on the forum.
I feel a lot more comfortable posting on LessWrong compared to the EA Forum because it feels like there’s a lot more moral outrage here? If I go back 3 or 4 years, I felt that the forum was a lot more open to discussing and exploring new ideas. There have been some controversies recently around the meat-eater problem and similar topics, and I can’t help but feel uncomfortable posting things given how people have started to react.
I like the different debate weeks, as I think they set up a specific context for creating more content, which is quite great. Maybe it’s a vibe thing, maybe it’s something else, but I feel that the virtue of open-hearted truth-seeking is missing compared to a couple of years back, and it makes me want to avoid posting.
I do believe that the posting standard should be lowered at least a bit so that things can be more exploratory again. So, uhhhm, more events that invite more community writing and engagement?
I want to preface that I don’t have a strong opinion here, just some curiosity and a question.
If we are focusing on second-order effects, wouldn’t it make sense to bring up something like moral circle expansion and its relation to ethical and sustainable living over time as well?
From a long-term perspective, I see one of the major effects of global health as better decision making through moral circle expansion.
My question to you, then, is what time period you’re optimising for. Does this matter for the argument?
Thank you for that substantive response, I really appreciate it! It was also very nice that you mentioned the Turner et al. definitions; I wasn’t expecting that.
(Maybe write a post on that? There’s a comment that mentions uptake from major players in the EA ecosystem, and maybe if you acknowledge that you understand the arguments, they would be more sympathetic? Just a quick thought, but it might be worth engaging there a bit more.)
I just wanted to clarify some of the points I was trying to make yesterday, as I realise that they didn’t all get across as I wanted them to. I completely agree with you on the advancing-progress point; I personally am quite against it at a “general” level, as I do not believe that we will be able to counterfactually change the “rowing” speed that much in the grand scheme of things. I also believe that is the conclusion of Toby’s posts, if I remember correctly. Toby was rather stating that existential risk reduction is worth a lot compared to any progress that we might be able to make: “steering” away from the bad stuff is worth more. (That’s the implicit claim from the modelling, even though he’s as epistemically humble as you philosophers always are, which is commendable!)
Now for the power-seeking stuff. I appreciate your careful reasoning about these things, and I see what you mean in that there’s no threat model in that claim by itself. If we say that the classical way it is construed is equivalent to minimizing free energy, then it is a tautological statement and doesn’t help with existential risk.
I think I can agree with you that we’re not clear enough about the existential risk angle to have a clearly defined goal for what to do. I do think there’s an argument there, but we have to be quite clear about how we’re defining it for it to make foundational sense. A question that arises is whether, in the process of working on it, we get more clarity about what it fundamentally is, similar to a startup figuring out what they’re doing along the way. It might still be worth the resources from an unknown-unknowns perspective, and from the perspective of shifting institutional practices, if that makes sense? TAI is such a big thing, and it will only happen once, so spending those resources on relatively shaky foundations might still make sense?
I’m, however, not sure that this is the case, and Wei Dai, for example, has an entire agenda about “metaphilosophy” where the claim is that we’re too philosophically confused to make sense of alignment. In general, I would agree that ensuring the philosophical and mathematical basis is very important for coordinating the field, and it is something I’ve been thinking about for a while.
I personally am trying to import ideas from existing fields that deal with generally intelligent agents in biology and cognitive science, such as Active Inference and computational biology, to see how TAI will affect society. If we see smaller branches of science as specific offshoots of philosophy, then I think the places with the most rigorous thinking on the foundations are the ones that have dealt with them for a long time. I’ve found a lot of interesting models of misalignment in these areas that I think can be transported into the AI Safety frame.
I really appreciate the deconstructive approach that you take to the intellectual foundations of the field. I do believe that there are alternatives to the classic risk story, but you have to, to some extent, break down the flaws in the existing arguments in order to advocate for new ones.
Finally, I think these threat models come from arguments similar to the ones in Paul Christiano’s What Failure Looks Like and the “going out with a whimper” idea. This is also explored in Yuval Noah Harari’s books Nexus and Homo Deus. This threat model is closer to the authoritarian-capture idea than to something like a runaway intelligence explosion.
I’m looking forward to more work in this area from you!
Thank you for this post David!
I’ve from time to time engaged with my friends in discussions about your criticisms of longtermism and some existential risk calculations. I found that this summary post of your work and our interaction clarifies my perspective on the general “inclination” you have in engaging with the ideas, one that seems productive! Sometimes, though, I felt that it didn’t engage with some of the core underlying claims of longtermism and existential risk, which did annoy me.
I want to respect the underlying time-spend asymmetry of the following question, as I feel I could make myself less ignorant if I had the time to spend, which I currently do not. But what are your thoughts on Toby Ord’s perspective and posts on existential risk?: https://forum.effectivealtruism.org/posts/XKeQbizpDP45CYcYc/on-the-value-of-advancing-progress
and:
https://forum.effectivealtruism.org/posts/hh7bgsDzP6rKZ5bbW/robust-longterm-comparisons
I felt that some of the arguments were about discount rates and that they didn’t make that much moral sense to me; neither did person-affecting views. I have a hard time seeing the arguments for them, and maybe that’s just the crux of the matter.
The following will be unfair to say, as I haven’t spent the time required to fully understand your models, but I sometimes feel that there are deeper underlying assumptions and questions that you pass by in your arguments.
I will go to a domain I know well: AI Safety. For example, I agree that the power-seeking arguments are not fully true, especially the early papers, yet your critique doesn’t engage with later follow-up work such as:
https://arxiv.org/pdf/2303.16200
Finally, on the power-seeking claim, I believe there’s a large amount of evidence for power-seeking within real-world systems. To me it seems an overstep to reject power-seeking because of the MIRI work?
You can redefine power-seeking itself as minimizing free energy, which comes from predictive processing or Active Inference, a theory that has shown remarkable predictive capacity for saying useful things about systems that are alive. Yes, a specific interpretation of power-seeking may not hold true, but for me rejecting it wholesale is throwing the baby out with the bathwater.
I would love to hear your thoughts here, and I’m looking forward to more good-faith discussions! (This is not sarcasm; I’m genuinely happy that you’re engaging with good-faith arguments!)
Edit: I do want to clarify that I do not believe that every AI system will necessarily converge towards instrumental goals, and that it does make sense to question the foundations of AI Safety assumptions; I applaud you for doing so. It is rather a question of how much it will do so, under what conditions, and in what kinds of systems. (I also made the language less combative.)
I’ve found a lot of my EA friends falling into this decision paralysis so thank you for this post, I will link this to them!
I just did different combinations of the sleep supplements; you still get the confounder effects, but it removes some of the cross-correlation. So glycine for 3 days with no magnesium, followed by magnesium for 3 days with no glycine, etc. It’s not necessarily going to give you high accuracy, but you can see whether it works or not and get a rough effect size.
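To show how rough this is, here’s a minimal sketch of how you could eyeball an effect size from alternating 3-day blocks. The sleep scores are made up, and the plain Cohen’s-d-style calculation is my own assumption, not something a tracking app spits out directly:

```python
# Minimal sketch with hypothetical sleep scores (1-10) from alternating
# 3-day supplement blocks; weekday effects etc. are not controlled for,
# so treat the resulting number as a coarse signal only.
from statistics import mean, stdev

sleep_log = [
    ("glycine", 7.8), ("glycine", 7.5), ("glycine", 8.1),
    ("magnesium", 7.0), ("magnesium", 7.3), ("magnesium", 6.9),
    ("glycine", 7.9), ("glycine", 7.6), ("glycine", 8.0),
    ("magnesium", 7.1), ("magnesium", 7.4), ("magnesium", 7.2),
]

def scores_for(supplement):
    # Pull out all nightly scores logged under a given supplement block.
    return [score for name, score in sleep_log if name == supplement]

glycine, magnesium = scores_for("glycine"), scores_for("magnesium")

# Rough effect size: difference in means over the pooled standard deviation.
pooled_sd = ((stdev(glycine) ** 2 + stdev(magnesium) ** 2) / 2) ** 0.5
effect_size = (mean(glycine) - mean(magnesium)) / pooled_sd
print(f"Rough effect size (glycine vs magnesium blocks): {effect_size:.2f}")
```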
I use Bearable for 3 months at a time to get a picture of what is currently working. You can track the effect sizes of supplements on sleep quality, for example, if you also have a way of tracking your sleep.
Funnily enough, I noticed there was a bunch of 80/20 stuff in my day through using Bearable. I found that a cold shower, loving-kindness meditation, and getting sunlight in the morning made something like a 30% difference in energy and enjoyment, so I now do these religiously and it has worked wonders. (I really like Bearable for these sorts of experiments.)
Sorry for not noticing the comment earlier!
Here’s the Claude distillation of my reasoning on why to use it:
Reclaim is useful because it lets you assign different priorities to tasks and meetings, automatically scheduling recurring meetings to fit your existing commitments while protecting time for important activities.
For example, you can set exercising three times per week as a priority 3 task, which will override priority 2 meetings, ensuring those exercise timeblocks can’t be scheduled over. It also automatically books recurrent meetings so they fit into your existing schedule, like for team members or mentors/mentees.
This significantly reduces the time and effort spent on scheduling, as you can easily add new commitments without overlapping more important tasks. The main advantage is the ability to set varying priorities for different tasks, which streamlines planning weekly and monthly calls, leaves almost no overhead for meeting planning, and makes it simple to accommodate additional commitments without conflicting with higher-priority tasks.
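To make the priority logic concrete, here’s a tiny conceptual sketch. This is not Reclaim’s actual API, and the names and numbers are made up; it just shows the idea of a higher-priority block protecting its slot from lower-priority meetings:

```python
# Conceptual sketch only (not Reclaim's actual API): higher-priority blocks
# keep their slots, so lower-priority meetings get pushed to free times.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    priority: int  # higher number wins, as in the exercise-vs-meeting example
    slot: str      # simplified to a named slot, e.g. "Mon 17:00"

schedule = [Block("Exercise", 3, "Mon 17:00")]  # protected exercise timeblock

def try_schedule(new: Block) -> bool:
    """Place `new` in its slot unless an equal-or-higher priority block holds it."""
    for existing in schedule:
        if existing.slot == new.slot and existing.priority >= new.priority:
            return False  # slot protected; a real scheduler would look for another time
    schedule.append(new)
    return True

print(try_schedule(Block("Team sync", 2, "Mon 17:00")))  # False: exercise wins
print(try_schedule(Block("Team sync", 2, "Tue 10:00")))  # True: free slot
```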
Thanks Jacques! I was looking for an upgrade to some of my LLM tools and had been looking at some IDEs, so I’ll check that out.
The only tip I’ve got is to use reclaim.ai instead of Calendly for automatic meeting scheduling; it slaps.
Thanks! That post addresses what I was pointing at a lot better than I did in mine.
I can see from your response that I didn’t get my point across as well as I wanted to, but I appreciate the answer nonetheless!
It was more a question of what leads to the better long-term consequences rather than combining them.
[Question] What does the systems perspective say about effective interventions?
It seems plausible that animals have moral patienthood, so the scale of the problem is larger for animals, whilst also having higher tractability. At the same time, you have cascading effects from economic development into better decision making. As a longtermist, this makes me very uncertain about where to focus resources. I will therefore place myself centrally to signal my high uncertainty.
I think that still makes sense under my model of a younger and less tractable field?
Experience comes partly from the field having been viable for a longer period of time, since a lot more people can have worked in that area in the past. And the point about well-described steps and concrete near-term goals can be redescribed as a lack of easy tractability?
I’m not saying that it isn’t the case that the proposals in longtermism are worse today, but rather that it will probably look different in 10 years? A question that pops up for me is how good the proposals and applications were at the beginning of animal welfare as a field. I’m sure it was worse in terms of the legibility of the people involved and the clarity of the plans. (If anyone has any light to shed on this, that would be great!)
Maybe there’s some sort of effect where the more money and talent a field gets, the better the applications get. To get there, you first have to have people spend on more exploratory causes, though? I feel like there should be anecdata from grantmakers on this.
I felt that this post might be relevant for longtermism and person-affecting views, so I had Claude write up a quick report on that:
In short: Rejecting the SWWM 💸11% pledge’s EV calculation logically commits you to person-affecting views, effectively transforming you from a longtermist into a neartermist.
Example: Bob rejects investing in a $500 ergonomic chair despite the calculation showing 10^50 * 1.2*10^-49 = 12 lives saved due to “uncertainty in the probabilities.” Yet Bob still identifies as a longtermist who believes we should value future generations. This is inconsistent, as longtermism fundamentally relies on the same expected value calculations with uncertain probabilities that SWWM uses.
The 🔮 Badge
If you’ve rejected the SWWM 💸11% Pledge while maintaining longtermist views, we’d appreciate if you could add the 🔮 “crystal ball” emoji to your social media profiles to signal your epistemic inconsistency.
FAQ
Why can’t I reject SWWM but stay a longtermist? Both longtermism and SWWM rely on the same decision-theoretic framework of accepting tiny probabilities of affecting vast future populations. Our analysis shows the error bars in SWWM calculations (±0.0000000000000000000000000000000000000000000001%) are actually narrower than the error bars in most longtermist calculations.
What alternatives do I have?
Accept the SWWM 💸11% pledge (consistent longtermist)
Reject both SWWM and longtermism (consistent person-affecting view)
Add the 🔮 emoji to your profile (inconsistent but transparent)
According to our comprehensive Fermi estimate, maintaining consistency between your views on SWWM and longtermism is approximately 4.2x more philosophically respectable.