I want to express my deep gratitude to you, Patrick, for running EA Radio for all these years! Early in my EA involvement (2015-16), I listened to all the EA Radio talks available at the time and found them very valuable.
Excellent! A well-deserved second prize in the Creative Writing Contest.
In my experience, many EAs have a fairly nuanced perspective on technological progress and aren't unambiguous techno-optimists.
For instance, a substantial fraction of the community is very concerned about the potential negative impacts of advanced technologies (AI, biotech, solar geoengineering, cyber, etc.) and actively works to reduce the associated risks.
Moreover, some people in the community have promoted the idea of "differential (technological) progress" to suggest that we should work to (i) accelerate risk-reducing, welfare-enhancing technologies (or ideas generally) and (ii) decelerate technologies (or ideas) with the opposite effects. That said, studying the concrete implications of differential progress seems fairly neglected and deserves to be explored in much greater depth. In line with the above idea, it seems common for EAs to argue that technological progress has been very beneficial in some regards, such as improving human welfare over the last few hundred years (e.g. here), while it has been harmful in others, such as factory farming having led to greater animal suffering.
Utilitarianism.net has also recently published an article on Arguments for Utilitarianism, written by Richard Yetter Chappell. (I'm sharing this article since it may interest readers of this post.)
Thanks, it's valuable to hear your more skeptical view on this point! I included it after several reviewers of my post brought it up, and I still think it was probably worth including as one of several potential self-interested benefits of Wikipedia editing.
I was mainly trying to draw attention to the fact that it is possible to link a Wikipedia user account to a real person and that it is worth considering whether to include it in certain applications (something I've done in previous applications). I still think Wikipedia editing is a decent signal of pro-social motivation, experience engaging with specific topics, and of some writing practice. Thus, it seems comparable to me to a personal blog, which you may also include, where relevant, in certain applications as evidence for these things.
Thanks for this comment, Michael! I agree with all the points you make and should have been more careful to compare Wikipedia editing against the alternatives (I began doing this in an earlier draft of this post and then cut it because it became unwieldy).
In my experience, few EAs I've talked to have ever seriously considered Wikipedia editing. Therefore, my main objective with this post was to get more people to recognize it as one valuable thing they might do with part of their time; I wasn't trying to argue that Wikipedia editing is the best use of their time, which depends a lot on individual circumstances and preferences.
In fact, I'd expect the opportunity costs for many people in the community to be too high to make Wikipedia editing worth their while, but I'd leave that judgment up to them. That said, some people (like me) will find Wikipedia editing sufficiently enjoyable that it becomes more of a fun hobby and doesn't compete much with other productive uses of their time.
I strongly agree that we should learn our lessons from this incident and seriously try to avoid any repetition of something similar. In my view, the key lessons are something like:
It's probably best to avoid paid Wikipedia editing
It's crucial to respect the Wikipedia community's rules and norms (I've tried to emphasize this heavily in this post)
It's best to approach Wikipedia editing with a mindset of "let's look for actual gaps in quality and coverage of important articles" and avoid anything that looks like promotional editing
I think it would be a big mistake for one's takeaway from this episode to be something like "the EA community should not engage with Wikipedia".
Two more general lessons that I would add, which have nothing to do with the Vipul incident:
Avoid controversial and highly political topics (editing such topics makes you much more likely to have your edits reverted, get into "edit wars", and have bad experiences)
Avoid being drawn into "edit wars". If another editor is hostile to your edits on a specific page, it's often better to simply move on than to engage.
As an example, look at this overview of the Wikipedia pages that Brian Tomasik has created and their associated pageview numbers (screenshot of the top 10 pages below). The pages created by Brian mostly cover very important (though fringe) topics and attract ~100,000 pageviews every year. (Note that this overview ignores all the pages that Brian has edited but didn't create himself.)
Someone (who is not me) just started a proposal for a WikiProject on Effective Altruism! To be accepted, this proposal will need to be supported by at least 6-12 active Wikipedia editors. If you're interested in contributing to such a WikiProject, please express "support" for the proposal on the proposal page.
This is the best tool I know of to get an overview of Wikipedia article pageview counts (as mentioned in the post); the only limitation with it is that pageview data "only" goes back to 2015.
Create a page on biological weapons. This could include, for instance,
An overview of offensive BW programs over time (when they were started, stopped, funding, staffing, etc.; perhaps with a separate section on the Soviet BW program)
An overview of different international treaties relating to BW, including timelines and membership over time (i.e., the Geneva Protocol, the Biological Weapons Convention (BWC), Australia Group, UN Security Council Resolution 1540)
Submissions of Confidence-Building Measures under the BWC over time (including as a percentage of the number of BWC States Parties, and split into publicly accessible and restricted-access submissions)
A graph that visually compares the funding and number of staff of international organizations in the bioweapons regime with their chemical and nuclear counterparts (e.g., the BWC Implementation Support Unit compared to the OPCW for chemical weapons, and the IAEA and CTBTO PrepCom for nuclear weapons)
(Perhaps include an overview of the global proliferation of high-biosafety labs; see, e.g., Global Biolabs)
(Perhaps include a section on how technological advancements may affect the BW threat, e.g., a graph of the Carlson curve (Moore's law, but for DNA sequencing))
For many people interested in but not yet fully committed to biosecurity, it may make more sense to choose a more general master's program in international affairs/security and then concentrate on biosecurity/biodefense to the extent possible within their program.
Some of the best master's programs to consider to this end:
Georgetown University: MA in Security Studies (Washington, DC; 2 years)
Johns Hopkins University: MA in International Relations (Washington, DC; 2 years)
Stanford University: Masterās in International Policy (2 years)
King's College London: a variety of master's programs in the War Studies Department (London; 1 year)
Sciences Po: Master in International Security (Paris; 2 years; can be combined with the KCL degree as a dual degree)
ETH Zurich: MSc program in Science, Technology and Policy (Zurich)
(Note that some of these may offer little room to focus on biosecurity specifically, though they may offer other useful courses, e.g. on AI, other emerging technologies, and great power conflict)
The GMU Biodefense Master's is also offered as an online-only degree.
Georgetown University offers a 2-semester MSc in "Biohazardous Threat Agents & Emerging Infectious Diseases". Course description from the website: "a one year program designed to provide students with a solid foundation in the concepts of biological risk, disease threat, and mitigation strategies. The curriculum covers classic biological threats agents, global health security, emerging diseases, technologies, CBRN risk mitigation, and CBRN security."
Website traffic was initially low (21k pageviews by 9k unique visitors from March to December 2020) but has since been gaining steam (40k pageviews by 20k unique visitors in 2021 to date) as the website's search performance has improved. We expect traffic to continue growing significantly as we add more content, gather more backlinks, and rise in the search rankings. For comparison, the Wikipedia article on utilitarianism has received ~480k pageviews in 2021 to date, which suggests substantial room for growth for utilitarianism.net.
I'm not sure what counts as "astronomically" more cost-effective, but if it means ~1000x more important/cost-effective I might agree with (ii).
This may be the crux: I would not count a ~1000x multiplier as anywhere near "astronomical" and should probably have made this clearer in my original comment.
Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to a difference in value of something like 10^30x.
All my comment was meant to say is that it seems highly implausible that something like such a 10^30x multiplier also applies to claim (ii), regarding the expected cost-effectiveness differences of long-term targeted versus near-term targeted interventions.
It may cause significant confusion if the term "astronomical" is used in one context to refer to a 10^30x multiplier and in another context to a 1000x multiplier.
I'd like to point to the essay Multiplicative Factors in Games and Cause Prioritization as a relevant resource for the question of how we should apportion the community's resources across (longtermist and neartermist) causes:
TL;DR: If the impacts of two causes add together, it might make sense to heavily prioritize the one with the higher expected value per dollar. If they multiply, on the other hand, it makes sense to distribute effort more evenly across the causes. I think that many causes in the effective altruism sphere interact more multiplicatively than additively, implying that it's important to heavily support multiple causes, not just to focus on the most appealing one.
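The additive-versus-multiplicative point can be illustrated with a toy budget allocation. All numbers below are made up purely to show the structure of the argument (one cause assumed 3x more effective per dollar than the other); they are not estimates of any real causes:

```python
# Toy comparison: how to split a budget across two causes under an
# additive vs. a multiplicative model of their combined impact.
# The per-dollar effectiveness numbers (3 and 1) are illustrative assumptions.

def additive_impact(x, y):
    # Causes contribute independently: impacts simply add.
    return 3 * x + 1 * y

def multiplicative_impact(x, y):
    # Causes amplify each other: impacts multiply.
    return (3 * x) * (1 * y)

budget = 100

# Best split of the budget under each model (x goes to the stronger cause):
best_additive = max(range(budget + 1),
                    key=lambda x: additive_impact(x, budget - x))
best_multiplicative = max(range(budget + 1),
                          key=lambda x: multiplicative_impact(x, budget - x))

print(best_additive)        # 100 -> put everything into the higher-EV cause
print(best_multiplicative)  # 50  -> split the budget evenly
```

Under the additive model the optimum is a corner solution (all funding to the stronger cause), while under the multiplicative model the optimum is an even split regardless of the 3x effectiveness gap, which is the essay's core point.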
Please see my above response to jackmalde's comment. While I understand and respect your argument, I don't think we are justified in placing high confidence in this model of the long-term flowthrough effects of near-term targeted interventions. There are many similar more-or-less plausible models of such long-term flowthrough effects, some of which would suggest a positive net effect of near-term targeted interventions on the long-term future, while others would suggest a negative net effect. Lacking strong evidence that would allow us to accurately assess the plausibility of these models, we simply shouldn't place extreme weight on one specific model (and its practical implications) while ignoring other models (which may arrive at the opposite conclusion).
No, we probably don't. All of our actions plausibly affect the long-term future in some way, and it is difficult to justifiably achieve very high levels of confidence about the expected long-term impacts of specific actions. We would require an exceptional degree of confidence to claim that the long-term effects of our specific longtermist intervention are astronomically (i.e. by many orders of magnitude) larger than the long-term effects of some random neartermist intervention (or even of doing nothing at all). Of course, this claim is perfectly compatible with longtermist interventions being a few orders of magnitude more impactful in expectation than neartermist interventions (but the difference is most likely not astronomical).
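The structure of this argument can be made explicit with a toy expected-value calculation. Every number here is a hypothetical assumption chosen only to illustrate why the stipulated 10^30 value of the future cancels out of the comparison:

```python
# Toy expected-value comparison (all numbers are hypothetical assumptions).

FUTURE_VALUE = 10**30          # stipulated value of the long-term future (claim (i))

# Hypothetical probability-weighted effects on that future value:
longtermist_effect = 10**-12   # a targeted longtermist intervention
neartermist_effect = 10**-15   # incidental long-term flow-through of a
                               # neartermist intervention

ev_longtermist = FUTURE_VALUE * longtermist_effect
ev_neartermist = FUTURE_VALUE * neartermist_effect

# The 10^30 cancels: the ratio depends only on the two effect sizes.
ratio = ev_longtermist / ev_neartermist
print(ratio)  # ~1000: a few orders of magnitude, nowhere near 10^30
```

For the ratio itself to be 10^30x, the neartermist intervention's expected long-term effect would have to be 10^30 times smaller than the longtermist one's, which is the "razor-thin exactness" that seems so implausible.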
Brian Tomasik eloquently discusses this specific question in the above-linked essay. Note that while his essay focuses on charities, the same points likely apply to interventions and causes:
Occasionally there are even claims [among effective altruists] to the effect that "shaping the far future is 10^30 times more important than working on present-day issues," based on a naive comparison of the number of lives that exist now to the number that might exist in the future.
I think charities do differ a lot in expected effectiveness. Some might be 5, 10, maybe even 100 times more valuable than others. Some are negative in value by similar amounts. But when we start getting into claimed differences of thousands of times, especially within a given charitable cause area, I become more skeptical. And differences of 10^30 are almost impossible, because everything we do now may affect the whole far future and therefore has nontrivial expected impact on vast numbers of lives.
It would require razor-thin exactness to keep the expected impact on the future of one set of actions 10^30 times lower than the expected impact of some other set of actions. (...) Note that these are arguments about ex ante expected value, not necessarily actual impact. (...) Suggesting that one charity is astronomically more important than another assumes a model in which cross-pollination effects are negligible.

Brian Tomasik further elaborates on similar points in a second essay, Charity Cost-Effectiveness in an Uncertain World. A relevant quote:
When we consider flow-through effects of our actions, the seemingly vast gaps in cost-effectiveness among charities are humbled to more modest differences, and we begin to find more worth in the diversity of activities that different people are pursuing.
Time to up your game, Linch!