The case for a common observatory

Tl;dr: One of the biggest problems facing any kind of collective action today is the fracturing of the information landscape. I propose a collective, issue-agnostic observatory with a mix of algorithmic and human moderation for the purposes of aggregating information, separate from advocacy (i.e. “what is happening”, not “what should happen”).

Introduction

There is a crisis of information happening right now. 500 hours of video are uploaded to YouTube every minute. Extremely rapid news cycles, empathy fatigue, the emergence of a theorised and observed polycrisis, and the general breakdown of traditional media institutions in favour of algorithms designed to keep you on-platform for as long as possible mean that we receive more data than ever before, and are consequently more easily overwhelmed than ever before. The pace of research output has increased drastically while the pace of research intake (i.e. our reading speed) has not. The recent emergence of AI systems able to manufacture large amounts of spurious disinformation, or “botshit”, has compounded this breakdown.

Any kind of corrective or preventative action to address global issues requires an accurate understanding of those issues first. John Green’s video on The Only Psychiatric Hospital in Sierra Leone gives a powerful example of why: charities with ample resources but incorrect information donated electrical generators that were too powerful for the hospital’s electrical grid, causing more harm than good. In the same way, misguided, ill-informed, and over-aggressive “assistance” can be worse than no assistance at all.

Why existing traditional information sources are insufficient

Most of us likely rely on some form of news media for information on world events. These outlets are updated around the clock by teams of dedicated staff who provide broad-based coverage of a wide variety of events. However, these sources are plagued by well-documented issues: bias, censorship, misaligned economic incentives, billionaire ownership, and so on. Furthermore, they are unlikely to focus in depth on the cause areas that most EAs are concerned about.

Why existing specialised information sources are insufficient

Good work is currently being done by organisations like BlueDot Impact, which collect information on cause areas like AI Safety and Biosecurity. However, these sources are also limited in specific ways that may be hard to discern at first glance.

Update delay

Since these sources position themselves as authorities within their cause areas, they rightfully delay incorporating speculative announcements or new developments. However, in fast-moving fields like AI safety, developments happen at a pace that exceeds the capacity for expert review. As such, resources can quickly become outdated or incorrect without being flagged as such.

Topical focus

Limited resources mean that sites usually focus on one cause area or area of interest. As a result, information becomes fractured and siloed within specialist communities, and opportunities for inter-group interaction fall. This encourages readers to hyper-specialise in one field and may lead them to discount common systemic factors that heighten x-risk across many fields.

More broadly, having many such fractured sources dilutes the information landscape as a whole, making it easier for vital resources to become lost in the noise. Transmission then depends on ad-hoc sharing rather than coordinated dissemination, reducing the efficiency with which information spreads.

Cause advocacy

The people designing resources for a cause area usually have preconceived notions about what should be done in that area. More importantly, they have usually reached these conclusions before putting the resources together. This can of course be helpful in setting priorities, but it can also reduce the diversity of ideas in complex, fast-moving fields.

Since researchers are likely to put material they find useful and relevant into a resource for others, specialised resources (especially those which collate links to other resources) are likely to suffer from confirmation bias. This narrows the possibility space for interventions by preventing readers from learning about interventions the authors may not find useful or productive. Furthermore, if small groups of similarly-opinionated experts create entry-level resources that are not subject to scrutiny, a form of anchoring bias is likely to take hold in the cause area community as a whole.

To be clear, I am not accusing BlueDot Impact or any other resource of intentional or unintentional bias. I am also not suggesting that these resources are counterproductive. However, the nature of specialised cause groups performing advocacy work is that they are likely to find information which agrees with their position more valuable. In a high-risk world where conclusions are often counterintuitive, putting all of our eggs in one basket, however well designed, is dangerous.

Why community forums are insufficient

While forums like this one are a valuable place to collect news, insights, and updates, they are diluted by their broader role as discussion forums. Disseminating news is not the moderators’ first priority, nor should it be. News collection, news presentation, and journalism are also specialised skillsets that are not easily replaced by AI bots or content algorithms.

Proposed model: Joint algorithmic-human observatory

The proposed model involves a separate website with two functions:

  1. Crowdsourced information collection: Modelled on sites like Hacker News and Reddit, this section lets users submit links for other users and moderators to vote on. Unlike those sites, there will be no generic “upvote” or “downvote” button. Instead, users will tag content with a variety of emojis based on whether they feel it is relevant to a cause area, of general interest, fair and balanced, etc. Comments will not be enabled except as user-submitted factual corrections (i.e. further points of clarification or points of information); discussion is reserved for forums like the present one. Ranking of links/comments will be based on Reddit’s ranking algorithm, with a bias towards recent, broadly relevant, and high-quality content (see the sketch after this list).

  2. Human information collection: A team of paid editors with subject-matter expertise or journalism skills would be retained as staff to process both user-submitted links and any news they themselves receive. This would function as a specialised newsroom producing weekly digests or long-read articles that act as manual filters for the week’s events. Access to such articles might be gated behind a subscription to recoup the costs of running the site.
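As a rough sketch of the ranking bias mentioned in point 1, the snippet below adapts Reddit’s “hot” formula (a logarithmically scaled score plus a time-based term) to emoji tags instead of up/downvotes. The tag names, weights, and decay constant are hypothetical placeholders for illustration, not a finalised design.

```python
import math
from datetime import datetime, timezone

# Hypothetical emoji-tag weights (illustrative only): tags replace the
# generic upvote/downvote and contribute differently to a link's score.
TAG_WEIGHTS = {
    "relevant_to_cause_area": 2.0,
    "general_interest": 1.0,
    "fair_and_balanced": 1.5,
    "misleading": -2.0,
}

# Arbitrary reference epoch for the recency term (Reddit uses its own).
EPOCH = datetime(2025, 1, 1, tzinfo=timezone.utc)


def hot_score(tag_counts: dict, submitted_at: datetime) -> float:
    """Reddit-style 'hot' ranking: log-scaled quality plus a recency term."""
    # Weighted tag total stands in for Reddit's (upvotes - downvotes).
    score = sum(TAG_WEIGHTS.get(tag, 0.0) * n for tag, n in tag_counts.items())

    # Log scaling: the first ten tags matter as much as the next hundred.
    order = math.log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0

    # Recency: newer submissions get a larger constant boost, so older
    # links need exponentially more positive tags to stay on top.
    # 45000 seconds (~12.5 hours) is Reddit's original constant.
    seconds = (submitted_at - EPOCH).total_seconds()
    return sign * order + seconds / 45000


# Example: a recent, well-tagged link outranks an older, weakly tagged one.
recent = hot_score({"relevant_to_cause_area": 12, "fair_and_balanced": 5},
                   datetime(2025, 6, 1, tzinfo=timezone.utc))
older = hot_score({"general_interest": 3, "misleading": 1},
                  datetime(2025, 5, 1, tzinfo=timezone.utc))
print(recent > older)  # True: both recency and tag quality favour the first link
```

In practice the weights and decay would presumably be tuned by the editorial team, and “broad relevance” could be rewarded by weighting links tagged across multiple cause areas more heavily.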

Critically, this source of information is not an advocate for action. Only news and factual corrections are presented, without calls to action. Cause advocacy organisations will not be able to submit op-eds or articles for publication at the observatory. This does not mean that the observatory is “neutral”: it will report that global warming is real, but it will not host a post arguing that geoengineering is the answer.

Potential counter-arguments

Possible biases

Counter-argument: The human moderators and editors of the observatory would hold a position of power through which to determine which sources of information are important and which are not. In effect, they would replicate the position of specialist authors collecting information for specialised resources. Even if the observatory has a position of non-advocacy, how information is presented affects how it is received. Something being described as a “1-in-1000 moonshot” is very different from something being described as “a rapidly maturing technology”.

Response: Biases are present in all sources of information. There is no such thing as an unbiased source, as even neutrality is a position on a subject: the position that all the parties involved are equally credible. The existence of the community-submitted section should act as a counterbalance to the editorial team and hopefully alert them to developments that they have missed or erroneously dismissed as unimportant.

“Source of Truth” risks

Counter-argument: A source of truth is an authoritative source that other actors in a system refer to in order to verify that their information is accurate. For example, the TPM (Trusted Platform Module) in a computer is a tamper-resistant piece of hardware that certifies that the computer’s OS or firmware has not been compromised at boot time. Importantly, information only flows one way from a source of truth: the computer cannot rewrite the TPM’s record; otherwise, malware on the computer would be able to certify itself as safe.

As you can already imagine, any such authoritative source, if compromised, presents a massive security risk. If the TPM itself is compromised (see the “Attacks” section of the Wikipedia article on TPMs), the computer has no way of correcting itself and will blindly trust the compromised TPM. Similarly, a single authoritative information source, if compromised, can create information gaps or spread misinformation to the community as a whole.

Response: The observatory does not position itself as a single source of truth. It acts as a link hub pointing to many other existing sources, reducing the likelihood that compromising the observatory could spread misinformation. Furthermore, users in this model would be able to submit corrections, which the human staff can then act upon.

Conclusion

I hope this idea is useful and sparks a fruitful discussion. I look forward to hearing and addressing further ideas on this topic.