CEO of Rethink Priorities
Marcus_A_Davis
I don’t disagree that someone who thinks there is a “negligible probability” of AI causing extinction would be unsuited for the task. That’s why I said to aim for neutrality.
But I think we may be disagreeing over whether “thinks AI risk is an important cause” is too close to “is broadly positive towards AI risk as a cause area.” I think so. You think not?
This survey makes sense. However, I have a few caveats:
> Think that AI risk is an important cause, but have no particular convictions about the best approach or organisation for dealing with it. They shouldn’t have worked for MIRI in the past, but will presumably have some association with the general rationality or AI community.
Why should the person overseeing the survey think AI risk is an important cause? Doesn’t that self-select for people who are more likely to be positive toward MIRI than whatever the baseline is for all people familiar with AI risk (and, obviously, competent to judge whom to include in the survey)? The ideal person, to me, would be neutral. Finding someone who is truly neutral would of course likely prove impractical, but selecting someone overtly positive would be a bad idea for the same reasons selecting someone overtly negative would be. The point is that the aim should be neutrality.
> They should also have a chance to comment on the survey itself before it goes out. Ideally it would be checked by someone who understands good survey design, as subtle aspects of wording can be important.
There should be a set time frame for them to draft a response to the survey before it goes public. A “chance” is too vague.
> It should be impressed on participants the value of being open and thoughtful in their answers for maximising the chances of solving the problem of AI risk in the long run.
Telling people to be open and thoughtful is great, but explicitly tying it to solving long run AI risk primes them to give certain kinds of answers.
> It’s complicated, but I don’t think it makes sense to have a probability distribution over probability distributions, because it collapses. We should just have a probability distribution over outcomes.
I did mean over outcomes. I was referring to this:
> If we’re uncertain about Matthews’ propositions, we ought to place our guesses somewhere closer to 50%. To do otherwise would be to mistake our deep uncertainty for deep scepticism.
That seems mistaken to me, though it could be because I’m misinterpreting it. I was reading it as saying we should split the difference between the two probabilities of success Matthews proposed. However, I thought he was suggesting, and I believe it is correct, that we shouldn’t just pick the median between the two, because the smaller number was only an example. His real point was that any tiny probability of success seems equally reasonable from the vantage point of now. If that’s true, we’d have to split our prior evenly over that whole range instead of picking the median between 10^-15 and 10^-50. And given that it’s very difficult to put a lower bound on the reasonable range, while a $1000 donation being a good investment depends on a specific lower bound higher than he believes can be justified with evidence, some people came across as unduly confident.
> But if it’s even annoying folks at EA Global, then probably people ought to stop using them.
Let me be very clear: I was not annoyed by them, even though I disagree, but people definitely used this reasoning. However, as I often point out, extrapolating from me to other humans is not a good idea, even within the EA community.
I think you are selling Matthews short on Pascal’s Mugging. I don’t think his point was that you must throw up your hands because of the uncertainty, but that he believes friendly AI researchers have approximately the same amount of evidence that AI research done today has a 10^-15 chance of securing the existence of future humanity as they have for any other infinitesimal but positive chance.
Anyone feel free to correct me, but I believe that in such a scenario spreading your prior evenly over all possible outcomes wouldn’t just mean arbitrarily splitting the difference between 10^-15 and 10^-50, but spreading your belief over all positive outcomes below some reasonable boundary and (potentially) above another* (and this isn’t taking into account the non-zero, even if unlikely, probability that despite caution AI research is indeed speeding up our doom). What those boundaries are is very difficult to tell, but if the estimation of them is off, and given the track record of predictions about future technology that’s not implausible, then all current donations could end up doing basically nothing. In other words, his critique is not that we must give up in the face of uncertainty but that the justification for AI risk reduction being valuable right now depends on a number of assumptions with rather large error bars.
Despite what appeared to him to be large uncertainty, he seemed to encounter many people who brushed aside, or seemingly belittled, all other possible cause areas, and this rubbed him the wrong way. I believe that was his point about Pascal’s Mugging. And while you criticized him for not acknowledging that MIRI does not endorse Pascal’s Mugging reasoning as support for AI research, he never said in the article that they did. He said many people at the conference replied to him with that type of reasoning (and, as a fellow attendee, I can attest to a similar experience).
*Normally, I believe, it would be all logically possible outcomes, but obviously it’s unreasonable to believe a $1000 donation, which was his example, has, say, a 25% chance of success given everything we know about how much such work costs, etc. However, where the lower bound of this estimate lies is far less clear.
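To make the arithmetic concrete, here is a minimal, purely illustrative sketch in Python. The payoff figure is one I made up for the example (it is not a number from Matthews or MIRI); the point is only how much the implied value of a $1000 donation swings across success probabilities that, on his reading, the evidence treats as equally defensible.

```python
# Illustrative only: the payoff figure below is made up for this sketch,
# not taken from Matthews or MIRI.
PAYOFF = 1e20   # hypothetical value of securing humanity's future (arbitrary units)
COST = 1000.0   # the donation, in dollars

# Across probabilities treated as equally defensible, the implied expected
# value of the donation swings from clearly worthwhile to effectively nothing.
for p in (1e-15, 1e-20, 1e-30, 1e-40, 1e-50):
    ev = p * PAYOFF
    verdict = "clears cost" if ev > COST else "does basically nothing"
    print(f"p = {p:.0e}: expected value = {ev:.1e} vs cost {COST:.0f} ({verdict})")
```

Where the lower bound of the “reasonable” range sits, and how you spread your prior across it, does nearly all of the work here, which is exactly the sensitivity I take him to be pointing at.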
This is super practical advice that I can definitely see myself applying in the future. The introductions on the sheets seem particularly well-suited to getting people engaged.
Also, “What is the first thing you would do if appointed dictator of the United States?” likely just entered my favorite questions to ask anyone in ice-breaker scenarios, many of which have nothing to do with EA.
That counts. And, as I said above to Ben, I should have been more broad anyway. I just think we can use more first-person narratives about earning to give to present the idea as less of an abstraction.
Of course, I could be wrong and those who would consider earning to give at all (or would be moved to donate more because of hearing such a story) would be equally swayed by a third person analysis of why it is a good idea for some people.
That would count, but I should have been more broad in my statement anyway. People like the “here’s what I did and why I did it” narrative, and earning to give could use more of these stories in general. I think a variety of them, showing different perspectives for people in different positions and with different abilities, would be a boon.
Btw, I was quite wrong about there being no first-person accounts; for one, Chris Hallquist has written about this extensively.
As for my personal experience with .impact here’s a brief summary:
I’m still relatively new to .impact, and I actually don’t recall with clarity how I found it. I believe, with barely over 50% confidence, that Peter Hurford told me about it. So far I’ve found it very welcoming and bursting with ideas and people willing to help. And if you review the meeting notes over any significant period of time, it is clear many things are getting accomplished. However, even with the ability to search all of Hackpad for projects, finding things by project type can be difficult if you don’t know where to look (an Index page sorting projects by type might help). As it stands right now, the easiest way for outsiders and newcomers to find something is often just to ask someone.
Also, for a newcomer, particularly one like myself who doesn’t currently offer any particularly in-demand skills like web design or programming, it can be difficult to know what exactly to do if you arrive just looking to help. However, I found the answer to this, as with many things in life, is to just dive in. If you think you can do it and have the time, volunteer. That’s how I ended up writing this post and moderating this forum. It really is the case that if you have the time, there’s probably something you could be working on.
Reintroducing .impact
As someone currently in the process of learning programming, here are a few thoughts on my attempt at learning two of the bolded languages, Java and Ruby:
I’m currently working through The Odin Project, which has a backend focus on Ruby, and I’d highly recommend it. I’d also recommend Peter’s guide to TOP, which I’ve found very useful; it includes some time estimates, some additional resources, and some things to learn after you complete TOP. Perhaps the biggest plus of TOP for me is that it gives projects of the correct difficulty at the correct time, so that they are challenging but doable. Another major benefit of TOP is the sheer scope of the resources already collected for you. Also, Ruby is far more intuitive than Java.
Before starting TOP I began learning programming by attempting to learn Java on my own without much structure. However, going on my own I’d often spend time attempting to track down a good explanation of a topic. There was also the issue of not knowing what a logical path to learning looked like, and I think I took some major false steps. The resources I found most beneficial during that time were probably the free courses at Cave of Programming, which covered a wide range of topics but had the huge downside of being somewhat dated video tutorials. Other than that I didn’t find many free resources for learning Java, but there is some pretty cheap stuff on Udemy, and a subscription to Lynda could be a good investment as well.
Of course, a huge caveat, I am a sample size of one who had no experience at all with programming before starting with Java. People with different backgrounds may have very different experiences.
> There is also a contingent of utilitarians within effective altruism who primarily care about reducing and ending suffering. They may be willing to compromise in favor of animal welfare, and not full rights, but I’m not sure. They definitely don’t seem a majority of those concerned with animal suffering within effective altruism.
Of course, only actual data on EAs could demonstrate the proportion of utilitarians willing to compromise, but this seems weird to me. It would seem utilitarianism all but commits you to accepting “compromises” on animal welfare, at least in the short term, given historical facts about how groups gained ethical consideration. As far as I know (anyone feel free to provide examples to the contrary), no oppressed group has ever seen respect for their interests go from essentially “no consideration” (where animals are today) to “equal consideration” without many compromising steps in the middle.
In other words, a utilitarian may want the total elimination of meat eating (though this is also somewhat contentious), but in practice they will take any welfare gains they can get. Similarly, utilitarians may want all wealthy people to donate to effective charities until global poverty is completely solved, but will temporarily “compromise” by accepting only 5% of wealthy people donating 10% of their income to such charities while pushing people to do better.
So, in practice, utilitarianism would mean setting the bar at perfection (and publicly signaling the highest standard that advances you towards perfection) but taking the best improvement actually on offer. I see no reason this shouldn’t apply to the treatment of animals. Of course, other utilitarians may disagree that this is the best long-term strategy (hopefully evidence will settle this question), but that is an argument about game theory, not about whether some improvement is better than none or whether settling for less than perfection is allowable.
Ah, I should have guessed that from the “this is being actively pursued” label or I could have just asked there.
Naturally, if you’d like the help, I suspect there may be at least a few people here who, given their familiarity with a given religion, may have a decent idea of how to pitch the focus on effectiveness to a specific group.
Are there any first-person pieces by someone who successfully changed careers in order to earn to give? There have been several stories discussing the topic over the past few years, but these all seem to be descriptive third-person accounts or normative analyses.
Even if not, if you’ve actually made such a change, could you please publicly share your story? I’d like to hear it, and I’d bet many others would too.
To answer myself: it turns out that, at least for iBooks, the problem was my impatience. It’s now in the library, and it’s still a week before it is officially released. Perhaps Kindle will be the same way.
Still, I so rarely anticipate book releases that I’m not sure if this is common.
Out of navel-gazing curiosity: has there been a poll on what EAs think about moral realism?
I searched the Facebook group and Googled a bit but didn’t come up with anything.
Has anyone else tried pushing EA specifically at religious audiences? There’s this on .impact, but it’s been a while since that was touched, and I’d guess it could use some follow-up. Doing this could prove really beneficial for reaching favorable audiences, especially if you or someone you’re close to is heavily involved in a church.
If you’d like, I can have a go at cleaning up the audio of Ord’s talk.
And by “have a go” I mean run it through a few filters to see if it can go from “very bad” to “passable”.
I’m up for helping with both of those. Of course, how much I can help with the former will depend on what exactly needs to be done.
A bit OT but this reminded me: Does anyone know if The Most Good You Can Do is coming out for Kindle?
I strongly prefer digital books, so buying it for Kindle is how I could leave a verified purchase review on Amazon. However, the book doesn’t seem to be available digitally anywhere in the U.S.; iBooks is seemingly selling it for Australia only.
I’m pretty sure I’m grasping at proverbial straws here though.
Such personal incentives are important, but, again, I didn’t advocate getting someone hostile to AI risk; I proposed aiming for someone neutral. I know no one is “truly” neutral, but you have to weigh the potential positive personal incentives of someone invested against potential motivated thinking (or, more accurately in this case, “motivated selection”).