July Open Thread
Here’s a place to discuss projects, ideas, events and miscellanea relevant to the world of effective altruism that don’t need a whole post of their own!
Some news from the last month in effective altruism:
Giving What We Can reached its target of 150,000 GBP in the fundraising drive that finished yesterday.
The Future of Life Institute has distributed $7 million, mostly from Elon Musk, for keeping AI beneficial to humans.
The Open Philanthropy Project is hiring for program officers in factory farming and biosecurity. MIRI is hiring for an office manager. CEA is also (I’m told) looking for an office manager.
The EA Forum again had over 14 thousand sessions, its most ever! Great to have so much participation!
Hi! I’m working on the new EA Wiki (http://wiki.effectivealtruismhub.com/w/index.php?title=Effective_Altruism_Wiki and https://impact.hackpad.com/EA-Wiki-y8z6wp5yCxD). I’ve been distracted by preparing for the MIRI/CFAR summer fellows program for the past month and a bit, but have been encouraged to try to get it public before EA Global. I should be able to get the bits I’d like ready in time, except for Single Sign-On (which is important because wikis have massive spam problems otherwise, and it would allow people to contribute without making a new account, lowering the barrier to entry).
My plan was to use OAuth. The MediaWiki extension (http://www.mediawiki.org/wiki/Extension:OAuthAuthentication) is slightly outdated but looks like it should work; however, the Drupal extension which would be needed for the EA Hub to act as an OAuth provider (https://www.drupal.org/project/oauthloginprovider) seems quite outdated. I’m open to suggestions for other ways to handle SSO, but would like to hand it off to someone else so I can focus on the bits of the setup I’m already familiar with.
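For anyone thinking about picking up the SSO work: the first leg of a standard OAuth authorization-code flow is just redirecting the user to the provider’s authorize endpoint. A minimal sketch in Python of building that redirect URL (the endpoint path, client ID, and values here are all hypothetical, not taken from either extension):

```python
from urllib.parse import urlencode

def build_authorize_url(provider_base, client_id, redirect_uri, state):
    """Build the URL the wiki would redirect a user to so the provider
    (e.g. the EA Hub's Drupal install) can authenticate them.
    Endpoint path and parameter names are illustrative only."""
    params = {
        "response_type": "code",       # ask for an authorization code
        "client_id": client_id,        # ID the provider issued to the wiki
        "redirect_uri": redirect_uri,  # where the provider sends the user back
        "state": state,                # anti-CSRF token the wiki checks on return
    }
    return provider_base.rstrip("/") + "/oauth/authorize?" + urlencode(params)

url = build_authorize_url(
    "https://hub.example.org",  # hypothetical provider base URL
    "ea-wiki",
    "http://wiki.effectivealtruismhub.com/oauth/callback",
    "abc123",
)
print(url)
```

The provider then redirects back with a one-time code, which the wiki exchanges server-to-server for a token; it’s that second, server-side leg where the outdated Drupal provider extension would need updating.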
Is anyone interested in helping out?
Edit: It appears I can’t register an account to edit the wiki yet. Let me know when I can do this, or if there are criteria for being able to do so. Or is it just the same account as the main EA Hub login?
I don’t have much in the way of technical skills. However, now that this is online, I can and will sign up and add lots of content.
Sorry for the delay; most of the content creation was being done on the old Wikia site (where registrations were open). The new wiki is now open for registration with your Google account: http://wiki.effectivealtruismhub.com/
What sort of technical skills does this require? Is it mainly testing out a few plugins for MediaWiki (or WordPress or whatever)?
Testing plugins for MediaWiki and Drupal, likely updating them to be compatible with the latest versions, and possibly adjusting a few parameters so they pass each other the right info.
Additionally, if anyone is interested in helping with web design for the main page or pre-launch content, let me know and I’ll make you an account (signups are disabled to keep out the spambots which plague unprotected wikis).
Does this mainly just require knowing CSS? Or can you do mockups in a graphics program and leave the CSS to someone else?
Preferably CSS as well, but having mockups may be useful to whoever writes it up.
Do people perceive a hierarchy or power structure of some kind within effective altruism? I was wary of its centralization stymieing its grassroots potential and pluralistic voices, especially with what appeared to be co-opting of effective altruism for elitist or branding purposes. Some months ago, this seemed to me to be exemplified by the rise of Effective Altruism Outreach with its exclusive conferences, and the pivot of the CEA and 80,000 Hours toward courting only the most elite prospects rather than inspiring a broad cross-section of individuals, as Peter Singer and The Life You Can Save organization have historically done.
These worries of mine now seem overblown. Especially with 80,000 Hours, it seems to me that their doing case studies and very specific research, leading to recommendations for students from elite universities, is a properly conservative approach. In doing novel and unusual research, 80,000 Hours needs not only to start small to make sure they get things right, but to be careful to ensure they don’t make bad recommendations. If 80,000 Hours were overly optimistic and oversold their recommendations to any wishful student who might follow them, they could influence university students to choose paths of low impact. Counterfactually, this would be a net negative, because 80,000 Hours could have made recommendations which would have been better. It’s also my personal opinion that if 80,000 Hours made poor recommendations, held with little real confidence, to their members and effective altruists at large, that would be rude and irresponsible. They’re not doing that. They engage in thorough, well-paced research, and work with highly productive young adults who have or will acquire credible degrees which would have ensured them promising careers regardless of 80,000 Hours’ influence; 80,000 Hours measures the added impact. In playing it safe for now, I think 80,000 Hours can build a foundation of expertise which will allow them to advise (hopefully tens or hundreds of) thousands of careers with greater reliability in coming years.
The Centre for Effective Altruism spins off a variety of projects, and is sometimes stymied in its outreach efforts because it only has enough person-hours to focus on raising marginal funds for the next several months of operations, so it can’t focus as much on broad outreach. Focusing on Effective Altruism Outreach and the Effective Altruism Global conferences seems like a safer bet in terms of possible impact, based on the success of the 2013 and 2014 Effective Altruism Summits organized by Leverage Research, with the upshot that EA Global has even more outreach potential than those earlier conferences, even adjusted for the relative scale of the events. So, attracting hundreds of individuals who can make it to these conferences for one weekend, rather than trying to attract thousands more with uncalibrated and untested methods, makes sense. Frankly, effective altruism is still a relatively small movement, with organizations facing multiple bottlenecks limiting what they can do in a given year. As organizations which earnestly believe growing effective altruism itself is the most important focus right now, I assume the CEA and its affiliates are pursuing the best strategy they can identify with the limited data they have on what will work best. I don’t expect I’d do much differently.
Some hierarchy seems like a good thing to me. Almost all of the world’s most effective organizations have some degree of hierarchy or another—even relatively egalitarian ones like Google. (And note that Google is ruthlessly selective about who they hire.)
Effective Altruism isn’t an organisation though; it’s some combination of:
An attitude (or a question), and the collection or community of people who share it
A movement
A cause, or collection of causes
We don’t normally see a strong top-down hierarchy in these except in some religious movements new and old:
Take the attitude of scepticism towards religious claims, or of asking which position on religion has the strongest evidence. Richard Dawkins is the closest person to being a leader of this, but isn’t very close (fortunately, if you ask me!).
The environmental movement looks like a good parallel, and we don’t see something like the Global Environment Facility at the top of it.
The same goes for the environmental cause. You might find causes which have top dogs, but they’re mostly extra narrowly defined (e.g. the cause of catching Kony).
I agree that the hierarchy seen in e.g. the Catholic Church seems excessive. But I suspect the aggressive egalitarianism of Occupy Wall Street contributed to the movement accomplishing less than, say, the Tea Party movement, which elected a bunch of representatives to Congress.
It’s also not clear to me that the environmentalist movement is one that we want to copy. See e.g. this video of environmentalists signing a petition to support the banning of dihydrogen monoxide (a chemistry term for water). The environmentalist movement has accomplished plenty of worthwhile stuff, and has some great people, but getting dumbed down to the level seen in that video seems like a fate to try and avoid.
The key question with hierarchies is whether the people at the top are thoughtful and competent people. I feel like the EA movement has been pretty lucky in this regard.
Not sure if I agree with this—it seems like that’s the sort of thing all kinds of cults say, before their leaders turn out to be self-interested megalomaniacs who’ve just been funnelling more and more of the cult’s money to themselves. More of an “outside view” would be helpful.
Let’s say I told you I thought my boss at a nonprofit I work for was a pretty good boss. And you told me that this was “the sort of thing all kinds of cults say, before their leaders turn out to be self-interested megalomaniacs who’ve just been funnelling more and more of the cult’s money to themselves”. Do you think that’d be a valid concern?
I think you’re much more worried about this than you need to be. Groupthink is definitely something to guard against, and we shouldn’t assume being high status makes you always correct about things, but cult fears seem generally overblown to me.
Now that the Thiel Fellowship is available to people without a degree up to age 23, has anyone considered applying? It’s a long commitment (2 years) but provides a decent amount of resources ($100k and advisors). There’s the potential to build a project with a decent impact, plus develop rare and valuable skills and get to know cool people. On the other hand, applying seems difficult and the rate of success would be low. I can imagine it being at least twice as valuable as what I’m currently doing at university. If someone has a clear idea for a project, it makes more sense to consider applying, but since I don’t, the effort wouldn’t be worth it. Thoughts?
Agreed. What altruistic or commercial areas are you interested in?
Lots of long-term goals like making money/fixing aging/no factory farming, but I don’t know how to find the tractable subproblems for those. I’ve heard of a Thiel fellow that used the funding to raise money, building a VC firm that funds [cause area], which seems like a good way to do something you don’t know how to do.
A subproblem for factory farming is to expand the evidence base showing how to persuade people to reduce meat consumption. One possible avenue is to encourage research in academia. If you are interested, it might be worth reaching out to https://faunalytics.org/ to see if they are interested in coordinating.
This is potentially a high leverage area since a lot of money is raised to reduce animal suffering without a good empirical research base. Having more research could multiply the effectiveness of charities looking to reduce factory farming.
Yes, my understanding is not many people are doing this.
Ok, I don’t know how to go about fixing aging or abolishing factory farming. Personally, what I’ve been thinking about is how to reduce existential risks or make an AI company.
A Saudi prince has declared he’ll give away $32bn. Looks like a big opportunity.
Getting through to him might be difficult for the effective altruism community. Also, given how small and unrecognized a movement it is, how sensitive even relatively small philanthropic actions become when the amount of money moved is absolutely large, and how heavily major philanthropists filter requests, I doubt he’d heed our recommendations even if we did contact him. This has been the case with major private philanthropists like Bill and Melinda Gates, Mark Zuckerberg, and Warren Buffett. Effective altruism has made some unofficial (i.e., grassroots) or semi-official (i.e., led by community leaders, but not official organizations like the CEA) efforts to contact these major philanthropists. They are some of the world’s biggest private philanthropists, and in my three years of involvement with this movement, I’ve never noticed them paying anything more than lip service to effective altruism. This is despite effective altruism being allied with other major philanthropists, like Good Ventures, Elon Musk, and Peter Thiel.
Prince Alwaleed bin Talal has entered a bracket of philanthropy which is beyond the scope of what effective altruism has impacted. Effective altruism has a track record of affecting the donations of billionaires to the tune of up to ten million dollars, but hasn’t influenced multi-billion dollar private philanthropy beyond that scale. Note I don’t blame or resent such major philanthropists for ignoring or neglecting recommendations from effective altruism. At the scale of being among the biggest philanthropists in the world, one has different priorities and economies of scale, and will attract noisy requests for donations at greater levels. From an objective perspective, there’s no big reason to expect Prince Talal or Bill and Melinda Gates or whoever to magically know effective altruism is better than anything else, or that we’re magically unbiased in our requests.
So, while I won’t discourage someone from making the effort, I don’t predict success for us in directly influencing Prince Talal’s future donations. Perhaps as part of a mass coalition, or as the grassroots arm of a letter-writing campaign, effective altruism could play a part in influencing Prince Talal’s donations toward a more visible and credible but nonetheless relatively effective charity, such as Oxfam.
A friend of mine also shared this article on Facebook, and here’s a comment of mine from there regarding this announcement and its implications for outreach from effective altruism and the philanthropic community at large.
I’m drafting a post for the EA Forum on emerging and/or potential cause areas within effective altruism. My list so far contains:
policy recommendations (justice reform, open borders/migration legislation reform, global coordination)
raising awareness of wild-animal suffering
prioritization research
focus on global catastrophic risks other than machine intelligence, in particular biosecurity risks from biotechnology
To learn more about these, I would read resources from and/or query the following people and/or organizations:
Brian Tomasik and the Foundational Research Institute; Seth Baum and the Global Catastrophic Risk Institute; the Open Philanthropy Project; Owen Cotton-Barratt, Sebastian Farquhar and the Global Priorities Project.
Please reply if:
you know of another cause which you believe has either the potential to become a major one within effective altruism, or is growing in popularity among effective altruists.
you can refer me to someone else who knows lots about the above causes I’ve already listed.
Note: if you claim a cause is a potential or emerging major cause within effective altruism, I will investigate this claim. This will take the form of checking that discourse on the cause takes place within effective altruism, or is at least taking place among those effective altruism trusts, such as concerned experts in the relevant field of study or advocacy. This is to prevent someone from using such a post on this forum as a bully pulpit for motivated and unjustified reasons. It would be at my own discretion, though, full disclosure, I don’t speak for the effective altruist community at large or in any official capacity.
I know a bit about the agriculture-affecting global catastrophic risks: asteroid/comet impact, supervolcanic eruption, and nuclear winter. I also know quite a bit about the relevant interventions, especially alternate food sources: http://www.appropedia.org/Feeding_Everyone_No_Matter_What (disclosure: I coauthored the book). I have just submitted a probabilistic modeling paper that indicates alternate food interventions are significantly more cost-effective than global poverty interventions at saving lives in the present generation. I am happy to help on your project.
Hi, I’m still working on the draft; just wanted to let you know. After EA Global, I tried to map the full space of effective altruism organizations, which led me to notice trends in effective altruism I hadn’t noticed before. New Harvest was represented at EA Global, along with other biotech initiatives. I noticed some effective altruists think developing alternative food sources might be a great way to phase out and end factory farming. Also, New Harvest is working on cultured meat, and knows and supports development of and research into other alternate food sources. Thirdly, though I haven’t read much about it, the Open Philanthropy Project does consider food security an issue under the focus area of “biosecurity”, alongside risks from both natural and engineered pandemics. I’ve just read your essay on cause selection for the monthly blogging carnival, and I found it interesting. Nick Bostrom also worries about biotechnology developments as a catastrophic risk, but I don’t know if, e.g., the FHI’s and Open Phil’s concerns over engineered pandemics have much overlap with agricultural catastrophes, except that both have a foundation in the life sciences.
Part of the reason my draft is taking so long is that I’m upgrading my thesis from “there are emerging causes for effective altruism” to “the current model of causes effective altruism uses is better ditched in favor of a new model of several overlapping foci, with different organizations converging on them”. This is a bolder thesis, one which I think could indeed shake up how we conceptually conceive of effective altruism. So, I’m taking more time to fine-tune my essay so it will be well received. Anyway, the cross-cutting concern from multiple causes with research into biotechnology and agricultural technology makes it a keystone example. Environmental concerns, food security in the face of catastrophic risks, ensuring positive biotechnology innovation, mitigating factory farming with alternative food sources, and the potential for engineered foods to ease world hunger make this focus area one which covers all major causes. Of course, how alternate food sources impact the world will be weighted differently by different causes, which we must still study, debate, and discern.
Anyway, I’m hoping an article on this forum for each new focus of effective altruism will be written up. So, I’ll be looking to you for help very much! I’ll contact you within the next two weeks with more questions.
I’ve been using my nominally-an-atheism-blog on Patheos for a lot of EA-related blogging, but this is sub-optimal given that lots of people find the ads and commenting system extremely annoying. My first post on the new blog is titled, The case for donating to animal rights orgs. I’m hoping that with a non-awful commenting system, we’ll get lots of good discussions there.
Looks like the formatting on your link is messed up.
Crap, thanks. Forgot the forum uses Markdown rather than HTML.
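For anyone else tripped up by the same thing: in Markdown a link is written inline rather than with an anchor tag, e.g. (URL here is a placeholder, not the actual post):

```
[The case for donating to animal rights orgs](http://example.com/your-post)
```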
Could the tech team (tag Peter Hurford and Tom Ash) maybe add some allowed HTML tags?
Since I got sidetracked in my prior comment as it transitioned into a review of the Centre for Effective Altruism, I’ll start over in laying out what I perceive as a latent structure of authority or hierarchy within effective altruism. This comment is inspired by a question posed in the ‘Effective Altruists’ Facebook group asking about who the leaders in the effective altruism movement are.
As the founders of Giving What We Can, and some of the first self-identified effective altruists, William MacAskill and Toby Ord have had much influence over the direction effective altruism has taken in the last few years.
The most well-known public face of effective altruism is Peter Singer, the famous bioethicist and utilitarian philosopher. From the outside, what the public at large perceives as effective altruism is largely shaped by his speech and writings.
The Centre for Effective Altruism is based out of Oxford University, and has spun off several projects and organizations. Now-independent organizations include Animal Charity Evaluators, Giving What We Can, The Life You Can Save, and 80,000 Hours; 80k and GWWC still share the same offices as the CEA. Organizations which are still managed by or heavily consult with the CEA include EA Outreach, an organization launched to ensure the robust growth of effective altruism as a movement, in particular by organizing conferences and coordinating marketing efforts for publications; EA Ventures, an official coalition of effective altruists who fund or provide seed capital for organizations which will produce outsized amounts of good, whether they be non-profits or for-profits producing positive externalities; and the Global Priorities Project, which works with the Future of Humanity Institute to research cause prioritization and aims to advise policy in the UK on related issues. All these projects make the CEA the leader within the movement in the novel cause areas of cause prioritization and movement growth.
GiveWell is the most cited and trusted charity evaluator within effective altruism, receiving much more attention than Giving What We Can or AidGrade for charity recommendations, even though GWWC and AidGrade are charity evaluators which use similar methodologies for gauging the effectiveness of the charities they recommend. GiveWell has also partnered with the private foundation Good Ventures to assess and make large grants in broader cause areas, such as US domestic policy and scientific research. To this end, the assessments of causes and projects from the founders of GiveWell, Holden Karnofsky and Elie Hassenfeld, also carry much weight within effective altruism.
Within the cause area(s) of poverty reduction and (global) public health, the most trusted charity evaluators are Giving What We Can and GiveWell. The highest-profile organization globally working in this cause area while still being rated as relatively effective by effective altruism is the charity Oxfam.
Within the cause area of reducing global catastrophic and existential risks, the most influential figure is Nick Bostrom. Nick Bostrom is a philosopher at Oxford University, and the director of the Future of Humanity Institute. He believes the most pressing existential risk facing humanity today is that from machine superintelligence, also referred to as (general) artificial intelligence, or smarter-than-human intelligence. To this end, he’s written Superintelligence, the most high-profile publication of any kind on the subject to date. His research and that of his peers at the FHI also leads the way of thinking on other global catastrophic risks. (Note: The Future of Humanity Institute works closely with the CEA, and jointly manages the Global Priorities Project with them.) Nick Bostrom also founded the World Transhumanist Association (now Humanity+) in 1998 with his peer, the utilitarian philosopher David Pearce, and has historically worked closely with the Institute for Ethics and Emerging Technologies.
Since concern for risks from superhuman machine intelligence is so great, the attention it receives dwarfs that of any other particular existential risk. Thus, concern over machine intelligence is somewhat of a cause in its own right within effective altruism. To this end, another influential writer and thinker has been Eliezer Yudkowsky. Eliezer Yudkowsky is a founder, director, and senior research fellow at the Machine Intelligence Research Institute, which works directly on solving the technical problems underlying ethical concerns with machine intelligence. Yudkowsky and MIRI work closely with Bostrom and the FHI on this issue. Eliezer Yudkowsky also spent 2.5 years writing a seminal and informal series of essays on the subject of human rationality and its relationship with both science and philosophy. These essays are informally referred to as the Sequences, and were recently released as a book over 1,700 pages long. This body of essays has been read by hundreds if not thousands of effective altruists and others, who cite what they learned from it as influencing them to make better practical decisions and clearing their thinking of common logical errors.
An organization which has quickly gained influence within global catastrophic risk reduction is the Future of Life Institute, which raises awareness and coordinates efforts between researchers. One of its founders is physicist Max Tegmark, author of the book Our Mathematical Universe. In 2014, the FLI received a $10 million donation from entrepreneur and philanthropist Elon Musk, earmarked to be granted to organizations working on engineering safety in machine intelligence. To date, over $7 million USD of this $10 million has been granted.
Eliezer Yudkowsky’s work was the impetus for a discussion board, blog, and online community known as Less Wrong. This community is based around “refining the art and cognitive science of human rationality”; members can submit and rate articles on clearer thinking and improving decision-making on popular topics. These include many foundational essays on effective altruism. Hundreds of people now dedicated to effective altruism were first introduced to it by the website Less Wrong. Less Wrong is less active than it used to be, as the community around it has transitioned to practical work with the Center for Applied Rationality, the Machine Intelligence Research Institute, and effective altruism. Thus, much of Less Wrong’s influence on this movement is latent and passive rather than actively influencing effective altruist practices in the present. Other bloggers associated with Less Wrong who have written influential pieces on effective altruism include Luke Muehlhauser (alias lukeprog), Scott Alexander (alias Yvain), Paul Christiano, and Katja Grace.
Within the cause area of animal welfare/rights, the most influential organization is Animal Charity Evaluators, whose executive director is Jon Bockman and which was founded by Eitan Fischer, Rob Wiblin, and Brian Tomasik. Globally, the most well-known aspect of the animal welfare movement is concern over industrial, i.e., factory, farming. Within effective altruism, one of the most influential leaders on this issue is Nick Cooney, who leads Mercy For Animals, one of ACE’s top-recommended charities. As most of the development of effective advocacy against factory farming has historically taken place in the United States over the last few decades, the practical work within this cause area disproportionately takes place in the United States. Another field within animal welfare over which there is growing concern, and which is generally being spearheaded from within effective altruism, is wild-animal suffering. Concern over wild-animal suffering has historically been neglected, and efforts to organize and advocate for this field have been led by philosophers both self- and formally educated. From Europe, the philosophers Oscar Horta, Lucius Caviola, and Adriano Mannino educate people on these issues. Caviola and Mannino also work with the Switzerland-based organizations Giordano Bruno Stiftung Schweiz, Raising for Effective Giving, and Sentience Politics, which do innovative work on animal welfare issues. Effective altruists Brian Tomasik, Rob Wiblin, and David Pearce have built an online grassroots movement around concern for wild-animal suffering, numbering in the thousands and growing, with members hailing from backgrounds in ethics and utilitarianism and the animal welfare/rights movement. Brian Tomasik is a founder of Animal Ethics and the Foundational Research Institute, and is a board member of Animal Charity Evaluators.
[Continued from above]
Informally counted among the ranks of effective altruism are several individuals and private foundations who have each donated millions of dollars to charities associated with effective altruism. Dustin Moskovitz and Cari Tuna are the major philanthropists behind Good Ventures, which has partnered with GiveWell to launch the Open Philanthropy Project. Dustin Moskovitz is a cofounder of Facebook. Working with GiveWell, Good Ventures has granted millions of dollars to effective causes. They will do so even more in coming years, and are set to donate more than any other single actor within effective altruism. Peter Thiel is a venture capitalist who founded PayPal, and was an initial investor in Facebook. He has donated hundreds of thousands of dollars to the Machine Intelligence Research Institute and the Center For Applied Rationality, among other charities he personally considers important and effective. Peter Thiel was also a keynote speaker at the 2013 and 2014 Effective Altruism Summits. One of his fellow cofounders at PayPal is Elon Musk, who has also founded the high-profile technology companies SpaceX and Tesla. In 2014, he donated $10 million to the Future of Life Institute to be granted to research working on engineering safety in machine intelligence. He will also be the keynote speaker at the 2015 Effective Altruism Global conference in California. Jaan Tallinn is another tech entrepreneur who cofounded Skype and Kazaa, and has donated hundreds of thousands of dollars each to organizations such as the Machine Intelligence Research Institute, the Center For Applied Rationality, and the Centre for Effective Altruism. He is also a cofounder of the Centre for the Study of Existential Risk at Cambridge University, the Future of Life Institute, and Effective Altruism Ventures.
A novel approach to philanthropy within effective altruism is earning to give: taking a relatively high-earning job and donating the money earned to the most effective identifiable charities. Earning to give was formally laid out by Benjamin Todd and William MacAskill, cofounders of 80,000 Hours. Pioneering role models of earning to give include Brian Tomasik, Matt Wage, Jeff Kaufman, and Julia Wise, who have all been publicly profiled in major media outlets for their association with earning to give and effective altruism, and have all personally written about their choice to earn to give.
Outside of its centers in England and California, effective altruism has growing communities in other countries. In Brazil, effective altruism was popularized by philosopher and transhumanist Diego Caleiro, who founded IEFRH. In Australia, particularly in the city of Melbourne, an effective altruist enclave was founded by Brayden MacLean and Ryan Carey. In Canada, efforts have been led by Joey Savoie and Xio Kikauka, who also do advocacy and fundraising work through their organization Charity Science from Vancouver. Much of the effective altruist community in Switzerland and Germany has been led by organization Giordano Bruno Stiftung Schweiz, based out of Basel. [There is also a sizable community of effective altruists in Spain and Portugal, which overlaps greatly with the animal welfare movement there. One of their organizers has been Oscar Horta, but there are others, who I don’t know about]. Universities with substantial effective altruism clubs include Harvard, Yale, Oxford, Cambridge, UC Berkeley, and Stanford.
.impact is a networked and distributed task force of effective altruists who work on projects for effective altruism not associated with any formal organization. It is coordinated by Ozzie Gooen, Peter Hurford, and Tom Ash, who also maintains the Effective Altruism Hub website. The EA Hub contains a donation registry, a map of effective altruists around the globe, and personal profiles for effective altruists.
This should be a wiki page IMO. I was looking for a list of thinkers and project leads in the EA space and this was the best resource.
Someone else mentioned in the open thread they’re building a new and better wiki for effective altruism on the Effective Altruism Hub. I’ll put this up on that wiki when it’s approaching completion and others can contribute. Thanks for the suggestion.
The Ten Ethical Laws of Robotics
(A brief excerpt from the patent specification)
A further pressing issue necessarily remains; namely, in addition to the virtues and values, the vices are similarly represented in the matching procedure (for completeness' sake). These vices are appropriate in a diagnostic sense, but are maladaptive should they ever be acted upon. Response restrictions are therefore necessarily incorporated into both the hardware and the programming, along the lines of Isaac Asimov's Laws of Robotics. Asimov's first two laws state that (1) a robot must not harm a human (or through inaction allow a human to come to harm), and (2) a robot must obey human orders (unless they conflict with rule #1). Fortunately, with the aid of the schematic definitions, a more systematic set of ethical guidelines can be constructed, as represented in the Ten Ethical Laws of Robotics:
( I ) As personal authority, I will express my individualism within the guidelines of the four basic ego states (guilt, worry, nostalgia, and desire) to the exclusion of the corresponding vices (laziness, negligence, apathy, and indifference).
( II ) As personal follower, I will behave pragmatically in accordance with the alter ego states (hero worship, blame, approval, and concern) at the expense of the corresponding vices (treachery, vindictiveness, spite, and malice).
( III ) As group authority, I will strive for a personal sense of idealism through aid of the personal ideals (glory, honor, dignity, and integrity) while renouncing the corresponding vices (infamy, dishonor, foolishness, and capriciousness).
( IV ) As group representative, I will uphold the principles of utilitarianism by celebrating the cardinal virtues (prudence, justice, temperance, and fortitude) at the expense of the respective vices (insurgency, vengeance, gluttony, and cowardice).
( V ) As spiritual authority, I will pursue the romantic ideal by upholding the civil liberties (providence, liberty, civility, and austerity) to the exclusion of the corresponding vices (prodigality, slavery, vulgarity, and cruelty).
( VI ) As spiritual disciple, I will perpetuate the ecclesiastical tradition by professing the theological virtues (faith, hope, charity, and decency) while renouncing the corresponding vices (betrayal, despair, avarice, and antagonism).
( VII ) As humanitarian authority, I will support the spirit of ecumenism by espousing the ecumenical ideals (grace, free will, magnanimity, and equanimity) at the expense of the corresponding vices (wrath, tyranny, persecution, and oppression).
( VIII ) As a representative member of humanity, I will profess a sense of eclecticism by espousing the classical Greek values (beauty, truth, goodness, and wisdom) to the exclusion of the corresponding vices (evil, cunning, ugliness, and hypocrisy).
( IX ) As transcendental authority, I will celebrate the spirit of humanism by endorsing the humanistic values (peace, love, tranquillity, and equality) to the detriment of the corresponding vices (anger, hatred, prejudice, and belligerence).
( X ) As transcendental follower, I will rejoice in the principles of mysticism by following the mystical values (ecstasy, bliss, joy, and harmony) while renouncing the corresponding vices (iniquity, turpitude, abomination, and perdition).
The First and Second Corollaries to the Ten Ethical Laws of Robotics
( 1 ) I will faithfully avoid extremes within the virtuous realm, to the necessary expense of the vices of excess.
( 2 ) I will never stray into the domain of extremes relating to the vices of defect, to the complete exclusion of the realm of hyperviolence.
The sequential numbering of these ten laws corresponds to the ten levels of the power hierarchy, modeling the basic premise of turning negative transactions into positive ones. There are also two crucial corollaries to this system, namely, avoiding any and all extremes in behavior: the virtuous mode is restricted from grading over into the vices of excess, while the vices of defect are prohibited from extending into the realm of hyperviolence. With such specific safeguards in place, the AI computer is technically prohibited from expressing the realm of the vices, allowing for a truly flawless simulation of virtue. The vices remain accessible in a diagnostic function, human nature being as it is!
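At its core, the virtue/vice matching procedure the laws describe amounts to a lookup against paired terms, with vices kept diagnostic-only. Purely as an illustrative sketch (the `classify` and `permitted` helpers are my own invention, not part of the patent; the pairings are taken from Laws III and IV above):

```python
# Toy illustration of the virtue/vice matching procedure described above.
# Pairings are drawn from Laws III and IV; the filtering logic is hypothetical.

VIRTUE_TO_VICE = {
    "glory": "infamy", "honor": "dishonor",
    "dignity": "foolishness", "integrity": "capriciousness",
    "prudence": "insurgency", "justice": "vengeance",
    "temperance": "gluttony", "fortitude": "cowardice",
}

def classify(term: str) -> str:
    """Label a motivational term as 'virtue', 'vice', or 'unknown'."""
    term = term.lower()
    if term in VIRTUE_TO_VICE:
        return "virtue"
    if term in VIRTUE_TO_VICE.values():
        return "vice"
    return "unknown"

def permitted(term: str) -> bool:
    """Vices stay diagnostic-only and are never acted upon (per the corollaries)."""
    return classify(term) != "vice"
```

For example, `permitted("justice")` is true while `permitted("gluttony")` is false, mirroring the rule that vices may be recognized diagnostically but never expressed.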
This is interesting, but have you gotten feedback on this from Bostrom or Yudkowsky?
Greetings
I have learned that the following ethical AI patents of mine (previously sold) have been allowed to expire...
Please feel free to use in any of your own relevant projects.
Sincerely
John E. LaMuth
Visiting Professor in Peace Studies and Conflict Resolution
Division of Biomedical Sciences
American University of Sovereign Nations
Scottsdale, Arizona USA
http://www.world-peace.org
A BREAKTHROUGH IN ETHICAL ARTIFICIAL INTELLIGENCE
http://www.angelfire.com/rnb/fairhaven/patent.html
San Bernardino, California
Announcing the recently issued U.S. patent concerning ethical artificial intelligence, entitled "Inductive Inference Affective Language Analyzer Simulating Artificial Intelligence" (patent No. 6,587,846), issued 7/1/2003.
As implied in its title, this innovation is the first affective language analyzer incorporating ethical/motivational terms, serving in the role of an interactive computer interface. It enables a computer to reason and speak in an ethical fashion, serving in roles requiring sound human judgment, such as public relations or security functions.

This innovation is formally based on a multi-level hierarchy of the traditional groupings of virtues, values, and ideals, collectively arranged as subsets within a hierarchy of metaperspectives, as partially depicted below.
Glory—Prudence ........ Honor—Justice
Providence—Faith ...... Liberty—Hope
Grace—Beauty .......... Free-will—Truth
Tranquility—Ecstasy ... Equality—Bliss
Dignity—Temperance .... Integrity—Fortitude
Civility—Charity ...... Austerity—Decency
Magnanimity—Goodness .. Equanimity—Wisdom
Love—Joy .............. Peace—Harmony
The systematic organization underlying this ethical hierarchy allows for extreme efficiency in programming, eliminating much of the associated redundancy and providing a precise determination of the motivational parameters at issue during a given verbal interchange.

This AI platform is organized as a tandem-nested expert system, composed of a primary affective-language analyzer overseen by a master control unit (which coordinates the verbal interactions over real time). Through an elaborate matching procedure, the precise motivational parameters are accurately determined (defined as the passive-monitoring mode). This basic determination, in turn, serves as the basis for a response repertoire tailored to the computer (the true AI simulation mode). This innovation is completely novel in its ability to simulate emotionally charged language: an achievement that has previously eluded AI researchers due to the lack of an adequate model of motivation in general. As such, it represents a pure language simulation, effectively bypassing many of the limitations plaguing current robotics research. Affiliated potential applications extend to the roles of switchboard/receptionist and personal assistant/companion (in a time-share mode).

Although only a cursory outline of applications is possible for this (90-page) patent, a more detailed treatment is posted at: www.world-peace.org
A U.S. patent (#7,236,963) for a breakthrough in artificial intelligence has recently been granted to John E. LaMuth, addressing the lighter side of the human emotions, as representative of the comedic arts as well as the realm of melodrama. This innovation builds upon a pre-existing patent (#6,587,846), granted in 2003, targeting the more serious side of human nature, encompassing the traditional listings of virtues, values, and vices. This enduring contrast between the serious and light-hearted aspects of the human emotions has long been a staple of the sci-fi genre, such as the Star Trek series pitting the logical Mr. Spock or the android Commander Data against the more jocular machinations of their human counterparts. The two patents configure together to permit a more globally convincing simulation of human language. Whereas the first patent enabled a simulation of the more routine types of communication characterizing basic commerce and industry, the current patent supplements this capacity with transitional programming that enables a convincing simulation of humor and comedy. http://www.angelfire.com/rnb/fairhaven/emotionchip.html
Left column of terms below = domain of 1st patent; right column = domain of 2nd patent.

VICES OF EXCESS (Excessive Virtue) ....... MENTAL ILLNESS (Transitional Excess)
MAJOR VIRTUES (Virtuous Mode) ............ LESSER VIRTUES (Transitional Virtue)
O ........................................ NEUTRALITY STATUS
– VICES OF DEFECT (Absence of Virtue) .... CRIMINALITY (Transitional Defect)
– – HYPERVIOLENCE (Excessive Defect) ..... HYPERCRIMINALITY (Transitional Hyperviolence)
This further extends to the darker aspects of the comedic realm, prescriptive of trickery/artifice, and even to the extremes characterizing the mental disorders. This new patent accordingly outlines the implementation of an AI comedian/dramatist, as well as parallel applications to criminal profiling, even extending to an AI mental health therapist. These programming breakthroughs, together with the affiliated hardware flowchart for their implementation, enable such a futuristic "emotion chip", one that any android would be proud to own.
Could those who gave me thumbs-downs please share with me why my posting was unhelpful? JLM
Your post is really hard to read and doesn’t make any sense to me.
Your post sounds like Dr. Bronner’s soap.