Cofounder of the Simon Institute for Longterm Governance and EA Geneva.
konrad
Disclaimer: I have aphantasia and it seems that my subjective conscious experience is far from usual.[1] I don’t have deep meditation experience; I have meditated a cumulative 300+ hours since 2015. I have never meditated for more than 2 hours a day.
I’ve found Headspace’s mindfulness stuff unhelpful, albeit pleasant. It just wasn’t what I needed, but I only figured that out after a year or so. Metta (loving-kindness) is the practice I consistently benefit from most, including for my attention and focus. It’s the best “un-clencher” when I’m by myself, and it can get me into ecstatic states or cessation. “Open-monitoring” metta in particular works well for me.
Related writing that resonates with me and has shaped my perspective and practice:
- Nick Cammarata on attachment/“clenching”, meditation as a laxative, and how that affects happiness, clarity and motivation.
- A lot of the Qualia Research Institute’s work is tangentially fascinating. They summarize the neuroscience on meditation quite well (afaict), and their scales of pain and pleasure map onto my experiences, too.
Maybe some of this can help you identify your own path forward?
I have a friend who did a personalized retreat with a teacher for 3 days and made major breakthroughs: overcoming multi-year depression and getting to the 6th Jhana. The usual retreats are probably inefficient; it seems better to have a close relationship and tighter feedback loops with a teacher.
[1] I don’t have an inner voice, I don’t have mental imagery, my long-term memory is purely semantic (not sensory or episodic), and I have little active recall capacity. Essentially, my conscious mind is exceptionally empty as soon as I reduce external input. That doesn’t mean I’m naturally great at focusing (there’s so much input!). I’m just not fussed about things for longer than a few minutes because my attention is “stuck” in the present. I forget most things unless I build good scaffolds. I don’t think this architecture is better or worse—there are tradeoffs everywhere. Happy to talk more about this if it seems relevant to anyone.
Thanks for writing this up, excited for the next!
One major bottleneck to the adoption of software & service industries is that the infrastructure doesn’t exist—more than 50% of people don’t have access to the bandwidth that makes our lives on the internet possible. https://www.brookings.edu/articles/fixing-the-global-digital-divide-and-digital-access-gap/ (That’s also not solved by Starlink, because it’s too expensive.)
For export of services to benefit the workers, you’d need local governance infrastructure that effectively maintains public goods, which also currently doesn’t exist for most people.
As you hint at, access to the digital economy helps more developed areas at best; the worst off don’t benefit. The poverty trap many are in is unfortunately harder to crack and requires substantial upfront investment, not trickle-down approaches. But most countries cannot get loans for such efforts, and companies have little incentive to advance/maintain such large public goods.
I haven’t thought about this enough and would appreciate reading reactions to the following: For lasting poverty alleviation, I’d guess it’s better to focus on scalable education, governance and infrastructure initiatives, powered by locals to enable integration into the culture. Does it seem correct that the development of self-determination creates positive feedback loops that also aid cooperation?
Also, this can all be aided by AI, but focusing on AI, as some suggest in the comments, seems unlikely to succeed at solving economic & governance development in the poorest areas. Would you agree that AI deployment can’t obviously reduce the drivers of coordination failures at higher levels of governance, as those are questions of inter-human trust?
I didn’t say Duncan can’t judge OP. I’m questioning the judgment.
FWIW, this sounds pretty wrongheaded to me: anonymization protects OP from more distant (mis)judgment, while their entourage knows they posted this. That seems like fair game to me, and not at all what you’re implying.
We didn’t evolve to operate at these scales, so this seems like a good solution.
Dear Nuño, thank you very much for the very reasonable critiques! I had intended to respond in depth, but it continues not to be the best use of my time. I hope you understand. Your effort has been thoroughly appreciated and continues to be integrated into our communications with the EA community.
We have now secured around 2 years of funding and are ramping up our capacity. Until we can bridge the inferential gap more broadly, our blog offers insight into what we’re up to. However, it is written for a UN audience and is non-exhaustive, so you may understandably remain on the fence.
Maybe a helpful reframe that avoids some of the complications of “interesting vs important” by being a bit more concrete is “pushing the knowledge frontier vs applied work”?
Many of us get into EA because we’re excited about crucial-considerations-type questions, and too many get stuck there: you can currently think about them ~forever while contributing practically nothing to securing posterity. Most problems I see beyond AGI safety aren’t bottlenecked by new intellectual insights (though sometimes those can still help). And even AGI safety might turn out in practice to come down to a leadership and governance problem.
This sounds great. It feels like a more EA-accessible reframe of the core value proposition of Nora’s and my post on tribes.
tl;dr please write that post
I’m very strongly in favor of this level of transparency. My co-founder Max has been doing some work along those lines in coordination with CEA’s community health team. But if I understand correctly, they’re not that up-front about why they’re reaching out. Being more “on the nose” about it, paired with a clear signal of support, would be great, because these people are usually well-meaning and can struggle to parse ambiguous signals. Of course, that’s a question of qualified manpower—arguably our most limited resource—but we shouldn’t let our limited capacity for immediate implementation stand in the way of inching ever closer to our ideal norms.
Yes, happily!
Thanks very much for highlighting this so clearly, yes indeed. We are currently in touch with one potential such grantmaker. If you know of others we could talk to, that would be great.
The amount isn’t trivial at ~600k. Max’s salary also guarantees my financial stability beyond the ~6 months of runway I have. It’s what has allowed us to make mid-term plans and allowed me to quit my CBG.
Update on the Simon Institute: Year One
The Simon Institute for Longterm Governance (SI) is developing the capacity to do a) more practical research on many of the issues you’re interested in and b) the kind of direct engagement necessary to play a role in international affairs. For now, this is with a focus on the UN and related institutions, but if growth is sustainable for SI, we think it would be sensible to expand to EU policy engagement.
You can read more in our 2021 review and 2022 plans. We also have significant room for more funding, as we only started fundraising again last month.
In my model, strong ties are the ones that need the most work because they have the highest payoff. I would also suggest that they generate weak ties even more efficiently than focusing directly on creating weak ties does.
This hinges on the assumption that the strong-tie groups are sufficiently diverse to avoid insularity. That seems to be the case over sufficiently long timescales (e.g. 1+ years), as most very homogeneous strong-tie groups eventually fall apart if they’re actually trying to do something and not just congratulate one another. That hopefully applies to any EA group.
That’s why I’m excited that, especially in the past year, the CBG program seems to be funding more teams in various locations, instead of just individuals. And I think those CB teams would do best to build more teams that start projects. The CB teams then provide services and infrastructure to keep exchange between all teams going.
This suggests I would do fewer EAGx events (because EAGs likely cover most of that need if CEA scales further) and more local “charity entrepreneurship”-type things.
EAs talk a lot about value alignment and try to identify people who are aligned with them. I do, too. But this is also funny at a global level, given that we don’t understand our values and aren’t very sure how to understand them much better, reliably. Zoe’s post highlights that it’s too early to double down on our current best guesses and that more diversification is needed to cover more of the vast search space.
Disclaimer: I have disagreeable tendencies; I’m working on it, but I’m biased. I think you’re getting at something useful, even if most people are somewhere in the middle. I think we should care most about the outliers on both sides because they could be extremely powerful when working together.
I want to add some **speculations** on these roles in the context of the level at which we’re trying to achieve something: individual or collective.
When no single agent can understand reality well enough to be a good principal, it seems most beneficial for the collective to consist of modestly polarized agents (this seems true from most of the literature on group decision-making and policy processes, e.g. “Adaptive Rationality, Garbage Cans, and the Policy Process” in Emerald Insight).
This means that the EA network should want people who are confident enough in their own world views to explore them properly, who are happy to generate new ideas through epistemic trespassing, and who explore outside of the Overton window, etc. Unless your social environment productively reframes what is currently perceived as “failure”, overconfidence seems basically required to keep going as a disagreeable.
By nature, overconfidence gets punished in communities that value calibration and clear metrics of success. Disagreeables become poisonous as they feel misunderstood, and good assessors become increasingly conservative. The successful ones of each type build up separate communities in which they are high status, and these communities extremize one another.
To succeed altogether, we need to walk the very fine line between productive epistemic trespassing and conserving what we have.
Disagreeables can quickly lose status with assessors because they seem insufficiently epistemically humble or outright nuts. Making your case against a local consensus costs you points. Not being well calibrated on what reality looks like costs you points.
If we are in a sub-optimal reality, however, effort needs to be put into defying the odds and changing reality. To have the chutzpah to change a system, it helps to ignore parts of reality at times. It helps to believe that you can have sufficient power to change it. If you’re convinced enough of those beliefs, they often confer power on you in and of themselves.
Incrementally assessing the baseline and then betting on the most plausible outcomes also deepens the tracks we find ourselves on. It is the safe thing to do and stabilizes society. Stability is needed if you want to make sure coordination happens. Thus, assessors rightly gain status for predicting correctly. Yet they also reinforce existing narratives and create consensus about what the future could be like.
Consensus about the median outcome can make it harder to break out of existing dynamics, because the barrier to coordinating such a break-out is even higher when everyone knows the expected outcome (e.g. that the odds of success for major change are low).
In a world where ground truth doesn’t matter much, the power of disagreeables is to create a mob that isn’t anchored in reality but that achieves the coordination to break out of local realities.
Unfortunately, for those of us who have insufficient capabilities to achieve these aims—to change not just our local social reality but the human condition—creating a cult just isn’t helpful. None of us have sufficient data or compute to do it alone.
To achieve our mission, we will need constant error correction. Plus, the universe is so large that information won’t always travel fast enough, even if there were a sufficiently swift processor. So we need to compute decentrally and somehow still coordinate.
It seems hard for single brains to be both explorers and stabilizers simultaneously, however. So as a collective, we need to appropriately value both and insure one another. Maybe we can help each other switch roles to make it easier to understand both. Instead of drawing conclusions for action at our individual levels, we need to aggregate our insights and decide on action as a collective.
As of right now, only very high status or privileged people really say what they think and most others defer to the authorities to ensure their social survival. At an individual level, that’s the right thing to do. But as a collective, we would all benefit if we enabled more value-aligned people to explore, fail and yet survive comfortably enough to be able to feed their learnings back into the collective.
This is of course not just a norms question, but also a question of infrastructure and psychology.
Thank you (and an anonymous contributor) very much for this!
you made some pretty important claims (critical of SFE-related work) with little explanation or substantiation
If that’s what’s causing downvotes in and of itself, I would want to caution people against it—that’s how we end up in a bubble.
What interpretations are you referring to? When are personal best guesses and metaphysical truth confused?
E.g. in his book on SFE, Vinding regularly cites people’s subjective accounts of reality in support of SFE at the normative level. He acknowledges that each individual has a limited dataset and biased cognition, but instead of simply sharing his perspective and those of others, he immediately jumps to normative conclusions. I take issue with that; see below.
Do you mean between “practically SFE” people and people who are neither “practically SFE” nor SFE?
Between “SFE(-ish) people” and “non-SFE people”, indeed.
What do you mean [by “as a result of this deconfusion …”]?
I mean that, if you assume a broadly longtermist stance, then no matter your ethical theory, you should be most worried about humanity not continuing to exist: life might exist elsewhere, and we’re still the most capable species known, so we might be able to help currently unknown moral patients (either far away from us in space or in time).
So in the end, you’ll want to push humanity’s development as robustly as possible to maximize the chances of future good and minimize the chances of future harm. Which projects to give which amount of resources to then seems a question of empirics, or rather epistemics, not ethics.
In practice, we almost never face decisions where we would be sufficiently certain about the possible results for our choices to be dominated by our ethics. We need collective authoring of decisions and, given moral uncertainty, this decentralized computation seems to hinge on a robust synthesis of points of view. I don’t see a need to appeal to normative theories.
Does that make sense?
Intrigued by which part of my comment it is that seems to be dividing reactions. Feel free to PM me with a low effort explanation. If you want to make it anonymous, drop it here.
Strong upvote. Most people I have encountered who identify with SFE seem to subscribe to the practical interpretation. The core writings I have read (e.g. much of Gloor & Mannino’s or Vinding’s stuff) tend to make normative claims but mostly support them using interpretations of reality that do not at all match mine. I would be very happy if we found a way to avoid confusing personal best guesses with metaphysical truth.
Also, as a result of this deconfusion, I would expect there to be very few to no decision-relevant cases of divergence between “practically SFE” people and others, if all of them subscribe to some form of longtermism or suspect that there’s other life in the universe.
Thanks for starting this discussion! I have essentially the same comment as David, just a different body of literature: policy process studies.
We reviewed the field in the context of our Computational Policy Process Studies paper (section 1.1). From that, I recommend Paul Cairney’s work, e.g. Understanding Public Policy (2019), and Weible & Sabatier’s Theories of the Policy Process (2018).
Section 4 of the Computational Policy Process Studies paper contains research directions we think are promising and that can be investigated with other methods, too. The paper was accepted by Complexity and is currently undergoing revisions—the reviewers liked our summary and thrust, but the maths is too basic for the audience, so we’re expanding the model. Section 1 of our Long-term Institutional Fit working paper (an update is in the works, too) also ends with concrete questions we’d like answered.
Not sure I correctly understand your situation (I have not lived in the Bay for more than a few weeks), but I think it can be worth doing the following:
State your affinity for EA, maybe even with some explanation
Let people get the wrong impression of what it means to you anyway
[Highly contextual] correct this impression, either through immediate explanation or letting your actions speak
-> over time, this can help everyone see the core value of what you cherish and reduce the all too common understanding of EA as an identity (within and outside of EA). We all need to work on not identifying with our convictions to avoid being soldiers.
Most interactions are iterative, not one-offs. You could help people understand that EA is not synonymous with AI xrisk.
If you think EA is about a general approach to doing good, explaining this more often would help you and the xrisk people. Identities are often pushed onto us and distort discourse. I see it as part of my responsibility to counteract this wherever I can. Otherwise, my affiliation would mostly be a way to identify myself as “in-group”—which reinforces the psycho-social dynamics that build the “out-group” and thus identity politics.
Your example seems to be an opportunity to help people better understand EA or even to improve EA with the feedback you get. You don’t necessarily have to stay on top of the news—on the contrary, it helps if you show that everyone can make it their own thing as long as the basic tenets are preserved.
I understand this might be effortful for many. I don’t want to pressure anyone into doing this because it can also look defensive and reinforce identity politics. I figured it might be worth laying out this model to make it easier for people to sit through the discomfort and counteract an oversimplified understanding of what they cherish—whether that’s EA or anything else.