Cofounder of the Simon Institute for Longterm Governance and EA Geneva.
konrad
Disclaimer: I have disagreeable tendencies; I'm working on them, but I remain biased. I think you're getting at something useful, even if most people are somewhere in the middle. We should care most about the outliers on both sides, because they could be extremely powerful when working together.
I want to add some **speculations** on these roles in the context of the level at which we’re trying to achieve something: individual or collective.
When no single agent can understand reality well enough to be a good principal, it seems most beneficial for the collective to consist of modestly polarized agents (most of the literature on group decision-making and policy processes supports this, e.g. “Adaptive Rationality, Garbage Cans, and the Policy Process”).
This means that the EA network should want people who are confident enough in their own world views to explore them properly, who are happy to generate new ideas through epistemic trespassing, and to explore outside of the Overton window etc. Unless your social environment productively reframes what is currently perceived as “failure”, overconfidence seems basically required to keep going as a disagreeable.
By nature, overconfidence gets punished in communities that value calibration and clear metrics of success. Disagreeables become poisonous as they feel misunderstood, and good assessors become increasingly conservative. The successful ones of each character build up different communities in which they are high status, and the two camps extremize one another.
To succeed altogether, we need to walk the very fine line between productive epistemic trespassing and conserving what we have.
Disagreeables can quickly lose status with assessors because they seem insufficiently epistemically humble or outright nuts. Making your case against a local consensus costs you points. Not being well calibrated on what reality looks like costs you points.
If we are in a sub-optimal reality, however, effort needs to be put into defying the odds and changing reality. To have the chutzpah to change a system, it helps to ignore parts of reality at times. It helps to believe that you can have sufficient power to change it. If you’re convinced enough of those beliefs, they often confer power on you in and of themselves.
Incrementally assessing baseline and then betting on the most plausible outcomes also deepens the tracks we find ourselves on. It is the safe thing to do and stabilizes society. Stability is needed if you want to make sure coordination happens. Thus, assessors rightly gain status for predicting correctly. Yet, they also reinforce existing narratives and create consensus about what the future could be like.
Consensus about the median outcome can make it harder to break out of existing dynamics because the barrier to coordinating such a break-out is even higher when everyone knows the expected outcome (e.g. odds of success of major change are low).
In a world where ground truth doesn’t matter much, the power of disagreeables is to create a mob that isn’t anchored in reality but that achieves the coordination to break out of local realities.
Unfortunately, for those of us whose aim is to change not just our local social reality but the human condition, creating a cult just isn’t helpful: none of us has sufficient data or compute to achieve that alone.
To achieve our mission, we will need constant error correction. Plus, the universe is so large that information won’t always travel fast enough, even if there were a sufficiently swift processor. So we need to compute decentrally and somehow still coordinate.
It seems hard for single brains to be both explorers and stabilizers simultaneously, however. So as a collective, we need to appropriately value both and insure one another. Maybe we can help each other switch roles to make it easier to understand both. Instead of drawing conclusions for action at our individual levels, we need to aggregate our insights and decide on action as a collective.
As of right now, only very high status or privileged people really say what they think and most others defer to the authorities to ensure their social survival. At an individual level, that’s the right thing to do. But as a collective, we would all benefit if we enabled more value-aligned people to explore, fail and yet survive comfortably enough to be able to feed their learnings back into the collective.
This is of course not just a question of norms, but also one of infrastructure and psychology.
EAs talk a lot about value alignment and try to identify people who are aligned with them. I do, too. But this is also funny at a global level, given that we don’t understand our values, nor are we very sure how to understand them much better, reliably. Zoe’s post highlights that it’s too early to double down on our current best guesses and that more diversification is needed to cover more of the vast search space.
Thanks very much for highlighting this so clearly, yes indeed. We are currently in touch with one potential such grantmaker. If you know of others we could talk to, that would be great.
The amount isn’t trivial at ~600k. Max’s salary also guarantees my financial stability beyond the ~6 months of runway I have. It’s what has allowed us to make mid-term plans and me to quit my CBG.
Yes, happily!
We have published a few additional blog posts of interest:
Thanks a lot for the compliments! Really nice to read.
The metrics are fuzzy as we have yet to establish the baselines. We will do that by the end of September 2021 via our first pilots, so that we then have one year of data to collect for impact analysis.
The board has full power over the decision of whether to continue SI’s existence. In Ralph Hertwig’s words, their role is to figure out whether we “are visionary, entirely naïve, or full of cognitive biases”. For now, we are unsure ourselves. What exactly happens next will depend on the details of the conclusion of the board.
I wrote a piece that I would have liked to post on the forum. However, I just figured out that I can’t post unless I get four more upvotes. So please upvote this, I’m not a bot. Blog post to read here on gdocs.
I think we can assume that people on this forum seek truth and personal growth. Of course, this is challenging for all of us from time to time.
I think having a norm of speaking truthfully and not withholding information is important for community health. Each of us has to assume the responsibility of knowing our own boundaries and pushing them within reasonable bounds, as few others can be expected to know us well enough. Combined with the fact that in this case people have consciously decided to *opt in* to the discussion by posting a comment, I would think it overly cautious to refrain from replying.
There surely are edge cases that are more precarious and deserve tailored thought but I think this isn’t one.
If you know somebody well enough to think they are pushing their boundaries in unsustainable ways, I would reach out to them and mention exactly that thought in a personal message. Add some advice on how to engage with the community and its norms sustainably, link to posts like this showing that we all struggle with similar problems, and then people can also work through possible problems regarding “not feeling good enough”.
Personally, I’d rather be forced to live in reality than be protected because people worry I might not be able to come to grips with it. One important reason for which I like the EA community is that it feels like we all have consented to hearing the truth, even if it might be uncomfortable and imply labour.
Thanks a lot for this, much appreciated! This gave me the chance to clear up some things for myself. It’s hard to get direct feedback. ;)
There are two key points I tried to get across with this post, and I should have highlighted them more clearly:
1. Propose new language to talk more productively about network and community building; and
2. Present and illustrate reasons why I think this lingo is needed and closer to reality.
Regarding your points:
I) Effectiveness and receiving money: I would want to encourage people who are able to/want to invest significant amounts of time into EA work to figure out what kind of direct, non-”community building” project they could start or contribute to (without significant downside risks) before they start building a local group or the like.
Most such work will likely look similar in many places: offer career coaching to the most promising people you can find. Being able to coach people requires you to stay on top of things, and 1-on-1 discussions leave plenty of room to avoid negative impact and learn quickly.
I could see community development happening in a more meaningful way through such outcome-oriented work than through “starting a local group and organizing meetups”. Such concrete work helps to a) develop individuals’ expertise much more directly and b) produce the outcomes that can prove alignment to the larger network with fairly tight feedback loops. Later, people can figure out their comparative advantage and, with support, tackle riskier prospects.
To have the time to do that, though, one has to have money. My recommendation here wouldn’t be to simply pay more people to have this time. I could, for example, imagine the “network development organisation” offering “EA trainings” to promising individuals. If completed successfully, people receive a first grant to build up their community through such direct work. Grants get renewed based on performance on a few standard metrics that can be built upon over time.
Some of this is already happening, but I see much room for improvement by modeling these structures more explicitly and driving their development more openly.
II) Conclusion: I’d recommend defining labels, roles and accountabilities within the network more clearly.
We often label CEA, LEAN, and EAS as “community building orgs”—but all three actually have quite different roles. I believe it would be better if these organisations explicitly defined their respective roles. It is not clear to me that they really are working on similar things beyond the fact that the same label is used for their activities. I would claim they mostly aren’t, and the few things they all do could be made more efficient and improved faster if handled by just one of them.
What is different from reality? Mostly the labels and definitions—which I hope should give a clearer sense of what everyone is doing and thereby ease the development of the network as a whole.
I aimed to contribute to a common understanding of what the network is, what communities are, how to build good ones, who has which responsibilities, how to define them better, how to make sure the network maintains high quality, and how to make people learn/understand all of that.
In the process of writing the other articles in our series on EA Geneva’s “community building”, we got much feedback that especially the latter point, “how to make people learn/see/understand all of that”, is currently a big issue. Many people seem upset with how they are received when trying to contribute or start something in good faith. Due to a lack of clarity, they end up wasting their own or EA orgs’ time, and it is frustrating for everyone involved.
We could make network building and community building much more effective if we employed better terminology, had a clearer vision of what the ideal network development structure might look like and could be collectively working towards it—or at least discuss it better. I hope this contributes a little to that process.
Awesome, thanks a lot for this work!
From what I understood when talking to CEA staff, this is also thought to replace handing out copies of Doing Good Better, yes? If so, I would emphasise this more explicitly, too.
To avoid spamming more comments, one final share: our resource repository is starting to take shape. Two recent additions that might be of use to others:
United Nations for the future—a collection of key international texts for long-term governance
An overview of fields to improve decision-making in policy systems
In the works: a brief guide to decision-making on wicked problems, an analysis of 28 policymaker interviews on “decision-making under uncertainty and information overload” and a summary of our first working paper.
We have set up an RSS feed for the blog (or just subscribe to the ~quarterly newsletter).
And last but not least, we now have fiscal sponsorship for tax-deductible donations from the US, UK and the Netherlands via EA Funds and a lot of room for more funding.
Hi! We uploaded drafts for two pieces last week:
The preprint of “Computational Policy Process Studies” (also has a video presentation linked to it)
A draft of working paper #1 “Policymaking for the Long-term Future: Improving Institutional Fit”
I really liked this comment. I will split up my answer into separate comments to make the discussion easier to follow. Thanks also for sharing Hard-to-reverse decisions destroy option value, hadn’t read it and it seems under-appreciated.
FWIW, this sounds pretty wrongheaded to me: anonymization protects OP from more distant (mis)judgment while their entourage is aware that they posted this. That seems like fair game to me, not at all what you’re implying.
We didn’t evolve to operate at these scales, so this appears like a good solution.
As a data point:
We have organized different “collective ABZ planning sessions” in Geneva that hinge on peer feedback given in a setting I would call a light version of CFAR’s hamming circles.
This has worked rather well so far and, with efficient pre-selection of participants, can probably scale quite well. We tried this at the Student Summit and it seemed useful to 100+ participants, even though we didn’t get to collect detailed feedback in the short time frame.
Simply providing the Schelling point for people to meet, pre-selecting participants, and improving the format seems potentially quite valuable.
Disclaimer: I have aphantasia and it seems that my subjective conscious experience is far from usual.[1] I don’t have deep meditation experience; I have meditated a cumulative 300+ hours since 2015. I have never meditated for more than 2 hours a day.
I’ve found Headspace’s mindfulness stuff unhelpful, albeit pleasant. It was just not what I needed but I only figured it out after a year or so. Metta (loving-kindness) is the practice I consistently benefit from most, also for my attention and focus. It’s the best “un-clencher” when I’m by myself. And it can get me into ecstatic states or cessation. Especially “open-monitoring” metta is great for me.
Related writing that resonates with me and has shaped my perspective and practice:
Nick Cammarata on attachment/”clenching”, meditation as a laxative, and how that affects happiness, clarity and motivation.
A lot of the Qualia Research Institute’s work is tangentially fascinating. They summarize the neuroscience on meditation quite well (afaict) and their scales of pain and pleasure map onto my experiences, too.
Maybe some of this can help you identify your own path forward?
I have a friend who did a personalized retreat with a teacher for 3 days and made major breakthroughs, e.g. overcoming multi-year depression and getting to the 6th Jhana. The usual retreats are probably inefficient; it seems better to have a close relationship and tighter feedback loops with a teacher.
[1] I don’t have an inner voice, I don’t have mental imagery, my long-term memory is purely semantic (not sensory or episodic) and I have little active recall capacity. Essentially, my conscious mind is exceptionally empty as soon as I reduce external input. That doesn’t mean I’m naturally great at focusing (there’s so much input!). I’m just not fussed about things for longer than a few minutes because my attention is “stuck” in the present. I forget most things unless I build good scaffolds. I don’t think this architecture is better or worse—there are tradeoffs everywhere. Happy to talk more about this if it seems relevant to anyone.
Maybe a helpful reframe that avoids some of the complications of “interesting vs important” by being a bit more concrete is “pushing the knowledge frontier vs applied work”?
Many of us get into EA because we’re excited about crucial-considerations-type things, and too many get stuck there: you can currently think about them ~forever while contributing practically nothing to securing posterity. Most problems I see beyond AGI safety aren’t bottlenecked by new intellectual insights (though sometimes those can still help). And even AGI safety might turn out, in practice, to come down to a leadership and governance problem.
Intrigued by which part of my comment it is that seems to be dividing reactions. Feel free to PM me with a low effort explanation. If you want to make it anonymous, drop it here.
Our World in Data has published two great posts this year, highlighting how the often-proposed dichotomy between economic growth and sustainability is false.
In The economies that are home to the poorest billions of people need to grow if we want global poverty to decline substantially, Max Roser points out that given our current wealth,
the average income in the world is int.-$16 per day
This is far below what we’d consider the poverty line in developed countries. It means that mere redistribution of what we have is insufficient—we’d all end up poor and unable to continue developing much further because we’d be too occupied with mere survival. In How much economic growth is necessary to reduce global poverty substantially?, he writes:
I found that $30 per day is, very approximately, the level below which people are considered poor in high-income countries.
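To make the redistribution point concrete: using just the two figures quoted above (int.-$16 average income, int.-$30 poverty line), even perfect, frictionless redistribution would leave everyone at roughly half the rich-country poverty line. A minimal sketch (the perfect-redistribution assumption is mine, for illustration only):

```python
# Back-of-the-envelope check of the redistribution argument, using the two
# figures Roser cites (both in international dollars per day).
world_avg_income = 16   # average income in the world, int.-$ per day
poverty_line = 30       # approximate poverty line in high-income countries

# Even perfect, frictionless redistribution can at best give everyone the average.
income_after_redistribution = world_avg_income

shortfall = poverty_line - income_after_redistribution
share_of_line = income_after_redistribution / poverty_line

print(f"Everyone would still fall int.-${shortfall}/day short of the "
      f"rich-country poverty line (reaching {share_of_line:.0%} of it).")
```

So without growth in the poorest economies, the arithmetic simply doesn’t allow everyone to clear the line.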
In the section “Is it possible to achieve both, a reduction of humanity’s negative impact on the environment and a reduction of global poverty?”, he adds:
As you will see in our writing there are several important cases in which an increased consumption of specific products gets into unavoidable conflict with important environmental goals; in such cases we aim to emphasize that we all as individuals, but also entire societies, should strongly consider to reduce the consumption of these products – and thereby reduce the use of specific resources and forgo some economic growth – to achieve these environmental goals. We believe a clear understanding of which specific reductions in production and consumption are necessary to reduce our impact on the environment is a much more forceful approach to reducing environmental harm than an unspecific opposition to economic growth in general.
So for discussions on how to approach individual “consumption” or policymaking around it, we could start a list of specific products to avoid. Would somebody be up for compiling this? It would be a resource I’d link to quite regularly. You can apparently just extract them from the 13 links Max Roser put just above the paragraph cited above. It would make for a great, short and crisp EA Forum post, too.
Didn’t downvote but my two cents:
I am unsure about the net value of encouraging people to simply need less management and wait for less approval.
Some (most?) people do need guidance until they can run projects independently and successfully; ignoring that need doesn’t make it go away.
The unilateralist’s curse is scary. Many of the decisions about EA network growth and strategy that the core organizations have come to seem rather counter-intuitive to most of us until we get the chance to talk them through with someone who has spent significant time thinking about them.
Even with value-aligned actors, coordination might become close to impossible if we increase the number of nodes without accelerating the development of culture. I currently prefer preserving the option of coordination over “many individuals trying different things because coordination seemed too difficult a problem to overcome”.