Leave me anonymous feedback: https://docs.google.com/forms/d/e/1FAIpQLScB5R4UAnW_k6LiYnFWHHBncs4w1zsfpjgeRGGvNbm-266X4w/viewform
John M Bridge
One of the reasons I no longer donate to EA Funds so often is that I think their funds lack a clearly stated theory of change.
For example, with the Global Health and Development fund, I'm confused why EAF hasn't updated at all in favour of growth-promoting systemic change like liberal market reforms. It seems like there is strong evidence that economic growth is a key driver of welfare, but the fund hasn't explained publicly why it prefers one-shot health interventions like bednets. It may well have good reasons for this, but there is absolutely no literature explaining the fund's position.
The LTFF has a similar problem, insofar as it largely funds researchers doing obscure AI Safety work. Nowhere does the fund openly state: "we believe one of the most effective ways to promote long term human flourishing is to support high quality academic research in the field of AI Safety, both for the purposes of sustainable field-building and in order to increase our knowledge of how to make sure increasingly advanced AI systems are safe and beneficial to humanity." Instead, donors are basically left to infer this theory of change from the grants themselves.
I don't think we can expect to drastically increase the take-up of funds without this sort of transparency. I'm sure the fund managers have thought about this privately, and that they have justifications for not making their thoughts public, but asking people to pour thousands of pounds/dollars a year into a black box is a very, very big ask.
For some reason the Forum isn't letting me update the post directly, so I want to highlight another core assumption which I didn't make explicit in the original post. Once the Forum starts working again I'll slot this into the post itself.
Core Assumption 2 - the Rule of Law holds together:
Later in the sequence, I'm planning to consider how a deterioration in the Rule of Law following the development of WGAI might impact the viability of the Clause. This could vary considerably by jurisdiction. For example, English constitutional law allows the legislature to break its own rules[1] if it wants to, giving Britain a unique ability amongst potential Developer host states to render the Clause inert simply by legislating that the Agreement was unlawful, or that the Developer's assets are property of the Crown.
For the moment, however, I am assuming that the Rule of Law will be largely untouched by the development of WGAI. I am doing this because it is important to explore how things might play out in a best-case scenario, where all of the relevant actors decide to play by the book. The conclusions in my post can then inform a broader analysis of the viability of the Clause in scenarios where actors' behaviour is further from the ideal.
- ^
CTRL+F "parliament had the power to make any law except any law that bound its successors" to see Wikipedia's summary of this topic.
~80% of the applications are speculative, from people outside the EA community and don't even really understand what we do...
Out of interest - do you folks tend to hire outside the EA community? And how much does involvement in EA affect your evaluation of applications?
I ask as I know some really smart and talented people working on development outside of EA who could be great founders, and I'd like to know if it's worth encouraging them to apply.
I should clarify - when I pocket something, it ends up in the automatic queue of things to read. That way, I don't really have to think about what to read next, and the things I want to read just pop up anyway.
Not sure if you've already tried it, but I find Pocket and Audible really help with this. It means I can just pop an article on my headphones whenever I'm walking anywhere without needing to sit down and decide to read it.
Cuts back on the activation energy, which in turn increases how much I "read".
Looking for an accountability buddy:
I'm working on some EA-relevant research right now, but I'm finding it hard to stay motivated, so I'm looking for an accountability buddy.
My thought is that we could set aside ~4hrs a week where we commit to call and work on our respective projects, though I'm happy to be flexible on the amount of time.
If you're interested, please reach out in the comments or DM me.
NB: One reason this might be tractable is that lots of non-EA folks are working on data protection already, and we could leverage their expertise.
Focusing more on data governance:
GovAI now has a full-time researcher working on compute governance. Chinchilla's Wild Implications suggests that access to data might also be a crucial leverage point for AI development. However, from what I can tell, there are no EAs working full time on how data protection regulations might help slow or direct AI progress. This seems like a pretty big gap in the field.
What's going on here? I can see two possible answers:
- Folks have suggested that compute is relatively easy to govern (eg). Someone might have looked into this and decided data is just too hard to control, and we're better off putting our time into compute.
- Someone might already be working on this, and I just haven't heard of them.
If anyone has an answer to this I'd love to know!
No Plans for Misaligned AI:
This talk by Jade Leung got me thinking - I've never seen a plan for what we do if AGI turns out misaligned.
The default assumption seems to be something like "well, there's no point planning for that, because we'll all be powerless and screwed". This seems mistaken to me. It's not clear that we'll be so powerless that we have absolutely no ability to encourage a trajectory change, particularly in a slow takeoff scenario. Given that most people weight alleviating suffering higher than promoting pleasure, this is especially valuable work in expectation, as it might help us change outcomes from a "very, very bad" world to a "slightly negative" one. This also seems pretty tractable - I'd expect ~10hrs thinking about this could help us come up with a very barebones playbook.
Why isn't this being done? I think there are a few reasons:
Like suffering focused ethics, it's depressing.
It seems particularly speculative - most of the "humanity becomes disempowered by AGI" scenarios look pretty sci-fi. So serious academics don't want to consider it.
People assume, mistakenly IMO, that we're just totally screwed if AI is misaligned.
Longtermist legal work seems particularly susceptible to the Cannonball Problem, for a few reasons:
Changes to hard law are difficult to reverse - legislatures rarely consider issues more than a couple times every ten years, and the judiciary takes even longer.
At the same time, legal measures which once looked good can quickly become ineffectual due to shifts in underlying political, social or economic circumstances.
Taken together, this means that bad laws have a long time to do a lot of harm, so we need to be careful when putting new rules on the books.
This is worsened by the fact that we don't know what ideal longtermist governance looks like. In a world of transformative AI, it's hard to tell if the rule of law will mean very much at all. If sovereign states aren't powerful enough to act as leviathans, it's hard to see why influential actors wouldn't just revert to power politics.
Underlying all of this are huge, unanswered questions in political philosophy about where we want to end up. A lack of knowledge about our final destination makes it harder to come up with ways to get there.
I think this goes some way to explaining why longtermist lawyers only have a few concrete policy asks right now despite admirable efforts from LPP, GovAI and others.
The Cannonball Problem:
Doing longtermist AI policy work feels a little like aiming heavy artillery with a blindfold on. We can't see our target, we've no idea how hard to push the barrel in any one direction, we don't know how long the fuse is, we can't stop the cannonball once it's in motion, and we could do some serious damage if we get things wrong.
Taking each of your points in turn:
Okay. Thanks for clarifying that for me - I think we agree more than I expected, because I'm pretty in favour of their institutional design work.
I think you're right that we have a disagreement w/r/t scope and implications, but it's not clear to me to what extent this is also just a difference in "vibe" which might dissolve if we discussed specific implications. In any case, I'll take a look at that paper.
I have a couple thoughts on this.
First - if you're talking about nearer-term questions, like "What's the right governance structure for a contemporary AI developer to ensure its board acts in the common interest?" or "How can we help workers reskill after being displaced from the transport industry?", then I agree that doesn't seem too strange. However, I don't see how this would differ from the work that folks at places like LPP and GovAI are doing already.
Second - if you're talking about longer-term ideal governance questions, I reckon even relatively mundane topics are likely to seem pretty weird when studied in a longtermist context, because the bottom line for researchers will be how contemporary governance affects future generations.
To use your example of the future of work, an important question in that topic might be whether and when we should attribute legal personhood to digital labourers, with the bottom line concerning the effect of any such policy on the moral expansiveness of future societies. The very act of supposing that digital workers as smart as humans will one day exist is relatively weird, let alone considering their legal status, still less discussing the potential ethics of a digital civilisation.
This is of course a single, cherry-picked example, but I think that most papers justifying specific positive visions of the future will need to consider the impact of these intermediate positive worlds on the longterm future, which will appear weird and uncomfortably utopian. Meanwhile, I suspect that work with a negative focus ("How can we prevent an arms race with China?") or a more limited scope ("How can we use data protection regulations to prevent bad actors from accessing sensitive datasets?") doesn't require this sort of abstract speculation, suggesting that research into ideal AI governance carries reputational hazards that other forms of safety/governance work do not. I'm particularly concerned that this will open up AI governance to more hit-pieces of this variety, turning off potential collaborators whose first interaction with longtermism is bad-faith critique.
Thanks for this Rory, I'm excited to see what else you have to say on this topic.
One thing I think this post is missing is a more detailed response to the "ideal governance as weird" criticism. You write that "weird ideal governance theories may well be ineffective", but I would suggest that almost all fleshed-out theories of ideal AI governance will be inescapably weird, because most plausible post-transformative AI worlds are deeply unfamiliar by nature.

A good intuition pump for this is to consider how weird modern Western society would seem to people from 1,000 years ago. We currently live in secular market-based democratic states run by a multiracial, multigender coalition of individuals whose primary form of communication is the instantaneous exchange of text via glowing, beeping machines. If you went back in time and tried to explain this world to an inhabitant of a mediaeval European theocratic monarchy, even to a member of the educated elite, they would be utterly baffled. How could society maintain order if the head of state was not blue-blooded and divinely ordained? How could peasants (particularly female ones) even learn to read and write, let alone effectively perform intellectual jobs? How could a society so dependent on usury avoid punishment by God in the form of floods, plagues or famines?
Even on the most conservative assumptions about AI capabilities, we can expect advanced AI to transform society at least as much as it has changed in the last 1,000 years. At a minimum, it promises to eliminate most productive employment, significantly extend our lifetimes, allow us to intricately surveil each and every member of society, and drastically increase the material resources available to each person. A world with these four changes alone seems radically different from and unfamiliar to our own, meaning any theory about its governance is going to seem weird. Throw in ideas like digital people and space colonisation and you're jumping right off the weirdness deep end.
Of course, weirdness isn't per se a reason not to go ahead with investigation into this topic, but I think the Wildeford post you cited is on the right track when it comes to weirdness points. AI Safety and Governance already struggles for respectability, so if you're advocating for more EA resources to be dedicated to the area I think you need to give a more thorough justification for why it won't discredit the field.
Also strong upvote. I think nearly 100% of the leftist critiques of EA I've seen are pretty crappy, but I also think it's relatively fertile ground.
For example, I suspect (with low confidence) that there is a community blindspot when it comes to the impact of racial dynamics on the tractability of different interventions, particularly in animal rights and global health.[1] I expect that this is driven by a combination of wanting to avoid controversy, a focus on easily quantifiable issues, the fact that few members of the community have a sociology or anthropology background, and (rightly) recognising that every issue can't just be boiled down to racism.
I'm a bit late to the party on this one, but I'd be interested to find out how differential treatment of indigenous groups in countries where snakebites are most prevalent impacts the tractability of any interventions. I don't have any strong opinions about how significant this issue is, but I would tentatively suggest that a basket of "ethnic inequality issues" should be considered a third "prong" in the analysis of why snakebites kill and maim so many people, and could substantially impact our cost-effectiveness estimates.
Explanation:
The WHO report linked by OP notes that, in many communities, over three-quarters of snakebite victims choose traditional medicine or spiritual healers instead of hospital treatment. I don't think this is a result of either of the two big issues that the OP identifies - it doesn't seem to stem from difficulty with diagnosis or cost of treatment, so much as being a thorny problem resulting from structural ethnic inequalities in developing countries.
I'm most familiar with the healthcare context of Amazonian nations, where deeply embedded beliefs around traditional medicine and general suspicion of mestizo-run governments can make it more difficult to administer healthcare to indigenous rainforest communities, low indigenous voter turnout reduces the incentives of elected officials to do anything about poor health outcomes, and discriminatory attitudes towards indigenous people can make health crises appear less salient to decisionmakers. Given that indigenous groups in developing countries almost universally receive worse healthcare treatment, and given that much indigenous land is in regions with high vulnerability to snake envenoming,[1] I wouldn't be surprised if this issue generalised outside of Amazonia.

Depending on the size of the effect here, this could considerably impact assessments of tractability. For example, if developing country governments won't pay for the interventions, it might be difficult to fund long-term antivenom distribution networks. Alternatively, if indigenous groups don't trust radio communications, communicating health interventions could be particularly difficult. Also, given the fact that "indigenous" is a poorly-defined term which refers to a host of totally unrelated peoples, it might be difficult to generalise or scale community interventions.
Nothing to add, I just want to comment that this is a wonderful initiative. Thanks for setting this up!
I'm currently writing a sequence exploring the legal viability of the Windfall Clause in key jurisdictions for AI development. It isn't strictly a red-team or a fact-checking exercise, but one of my aims in writing the sequence is to critically evaluate the Clause as a piece of longtermist policy.
If I wanted to participate, would this sort of thing be eligible? And should I submit the sequence as a whole or just the most critical posts?
UK/European folks - if you're looking for a second monitor, I recommend you buy one of these. They usually have a discount code, which makes them some of the best value on the market.
The only thing to keep in mind is that they eat up your battery pretty fast, which may not be ideal if you plan to use them for long stretches away from a plug socket.
Hi Christoph. I love this idea, and I'm a subscriber, but my DoneThat app keeps turning off at the start of each day, and I can't work out how to stop it doing that. I also can't see any way to contact support on your website. Is there an email I can contact about this?