Adaptation: Assuming that advanced AI would preserve humanity is like an ant colony assuming that real estate developers would preserve their nest. Those developers don’t hate ants; they just want to use that patch of ground for something else. (I may have seen this ant analogy somewhere else but can’t remember where.)
Arran McCutcheon
If the capabilities of nuclear technology and biotechnology advance faster than their respective safety protocols, the world faces an elevated risk from those technologies. Likewise, increases in AI capabilities must be accompanied by an increased focus on ensuring the safety of AI systems.
Human history can be summarised as a series of events in which we slowly and painfully learned from our mistakes (and in many cases we’re still learning). We rarely get things right the first time. The alignment problem may not afford us the opportunity to learn from our mistakes: if we develop misaligned AGI, we will go extinct, or at the very least cede control of our destiny and miss out on the type of future that most people want to see.
GiveWell for AI alignment
Artificial intelligence
When choosing where to donate to have the largest positive impact on AI alignment, the current best resource appears to be Larks’ annual literature review and charity comparison on the EA/LW forums. Those posts are very high quality, but they’re only published once a year and are ultimately the views of one person. A frequently updated donation recommendation resource, contributed to by various experts, would improve the volume and coordination of donations to AI alignment organisations and projects.
This is probably not the first time this idea has been suggested, but I haven’t seen it explicitly mentioned among the current project ideas or commented suggestions. Refinement of idea #29.
Website for coordinating independent donors and applicants for funding
Empowering exceptional people, effective altruism
At EAG London 2021, many attendees indicated in their profiles that they were looking for donation opportunities. Donation autonomy is important to many prospective donors, and increasing the range of potential funding sources is important to those applying for funding. A curated website that lets applicants post requests for funding, and lets potential donors browse those requests and offer to fully or partially fund them, seems like an effective solution.
Research scholarships / funding for self-study
Empowering exceptional people
The value of a full-time researcher in some of the most impactful cause areas has been estimated at between several hundred thousand and several million dollars per year, and research progress is now seen by most as the largest bottleneck to improving the odds of good outcomes in these areas. Widespread provision of scholarships / funding for self-study could enable far more potential researchers to gain the experience, knowledge, skills and qualifications needed to make important contributions. Depending on the average amount granted to scholarship / funding applicants, even a hit rate of 5-10% (in terms of creating full-time researchers in high-impact cause areas) could be a good use of funds.
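As a rough illustration of that hit-rate claim (the grant size here is a hypothetical assumption, not an estimate from any source): suppose each grant averages $30,000 and 5% of recipients go on to become full-time researchers in a high-impact cause area. Then

$$\text{cost per researcher created} \approx \frac{\$30{,}000}{0.05} = \$600{,}000,$$

which is on the order of one to two years of a researcher’s estimated value even at the low end of the range above, so a multi-year research career would comfortably repay the outlay.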
EA Funds and other orgs already do this to some extent; I’m envisaging a much wider program.
Thanks Khorton for the feedback and additional thoughts.
I think the impact of cold emails is normally neutral; it would have to be a really poorly-written or antagonising email to make the reader actively go and do the opposite of what the email suggests! I guess neutral also qualifies as ‘not good’.
But it seems like people with better avenues of contact to DC have been considering contacting him anyway, through cold means or otherwise, so that’s great.
Exactly: he has written posts about those topics, and about ‘effective action’, predictions and so on. And there is this article from 2016 which claims ‘he is an advocate of effective altruism’, although it then says ‘his argument is mothball the department (DFID)’, which I’m fairly sure most EAs would disagree with.
But he’s also written about a huge number of other things, day-to-day distractions are apparently the rule rather than the exception in policy roles, and value drift is always possible. So it would be good to have someone on his team, or with good communication channels to them, who can re-emphasise these issues (without publicly associating EA with Cummings or any other political figure or party).
Although the blog post is seeking applications for various roles, the email address to send applications to is ‘ideas for number 10 at gmail dot com’.
If someone took that address literally and sent an email outlining some relatively non-controversial EA-aligned ideas (e.g. collaboration with other governments on near-term AI-enabled cyber security threats, marginal reduction of risks from AI arms races, pandemics and nuclear weapons, enhanced post-Brexit animal welfare laws, and maintenance of the UK’s foreign aid commitment and/or increased effectiveness of foreign aid spending), would the expected impact of that email be positive (a higher chance of the above policies being adopted), negative (a lower chance of the above policies being adopted) or basically neutral (highly likely to be ignored or unread, or irrelevant even if the policies are adopted, given uncertainty over long-term impact)?
I’m inclined to have a go unless the consensus is that it would be negative in expectation.
Thanks for sharing. More research like yours and WAI’s is definitely needed on which species, and which stages of development within species, are likely to experience suffering, and on how we should weigh moderate versus extreme suffering and pleasure.
As far as I’m aware, the lives of invertebrates are considered likely to be net negative due to r-selection (most, perhaps all, species reproduce by having a large number of offspring, most of whom die at a very young age) and short lifespans in general, which tend to end in painful deaths by dehydration, being eaten alive, etc. The extreme suffering involved in this type of death is thought to typically outweigh any positive aspects of the individual’s short life.
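To make that reasoning concrete (a minimal sketch with made-up numbers, not figures from any actual estimate): expected lifetime welfare for an individual of an r-selected species could be modelled as

$$E[W] = p \cdot W_{\text{adult}} + (1-p) \cdot W_{\text{early death}},$$

where $p$ is the probability of surviving to adulthood. With, say, $p = 0.01$, $W_{\text{adult}} = +10$ and $W_{\text{early death}} = -5$ on some arbitrary welfare scale, $E[W] = 0.1 - 4.95 = -4.85$: even a strongly positive adult life is swamped by the near-certainty of a painful early death.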
I don’t know of any explicit calculations apart from Charity Entrepreneurship’s weighted animal welfare index (http://www.charityentrepreneurship.com/blog/is-it-better-to-be-a-wild-rat-or-a-factory-farmed-cow-a-systematic-method-for-comparing-animal-welfare), which includes an estimate for ‘wild bug’ (spoiler alert: it’s net negative). Are you able to share some of your calculations?
In his 2007 paper ‘Protagonistic Pleiotropy’, de Grey comments on ‘...the concept of antagonistic pleiotropy (AP) proposed by Williams (Williams, 1957) and now recognised to play a widespread role in aging’. He also mentioned it in an interview posted on fightaging.org last year.
So presumably antagonistic pleiotropy is already accounted for in the SENS Foundation’s ongoing work in developing repair technology, even if it’s not defined as one of the seven major classes of cellular and molecular damage in its own right.
That’s right; he said, ‘It’s just like, if we’re building a road and an anthill just happens to be in the way, we don’t hate ants, we’re just building a road, and so, goodbye anthill.’