Pareto Fellowship was shut down? When? What happened?
Zeke_Sherman
We all know how many problems there are with reputation and status-seeking. It would lower epistemic standards, entrench power users, and make it harder for outsiders and newcomers to get any traction for their ideas.
If we do something like this, it should be for very specific capabilities, like reliability, skill, or knowledge in a particular domain, rather than generic reputation. That would make it more useful and avoid some of the problems.
I think we need more reading lists. There have already been one or two for AI safety, but I’ve not seen similar ones for poverty, animal welfare, social movements, or other topics.
This is odd. Personally, my reaction is that I want to get to a project before other people do. Does bad research really make it harder to find good research? This doesn't seem like a likely phenomenon to me.
Optimizing for a narrower set of criteria allows more optimization power to be put behind each member of the set. I think it is plausible that those who wish to do the most good should put their optimization power behind a single criterion, as that gives it some chance to actually succeed.
Only if you assume that there are high thresholds for achievements.
The best candidate afaik is the right to exit, as it eliminates the largest possible number of failure modes in the minimum-complexity memetic payload.
I do not understand what you are saying.
Edit: do you mean the option to get rid of technological developments and start from scratch? I don’t think there’s any likelihood of that; it runs directly counter to all the pressures described in my post.
Thanks for the comments.
Evolution doesn’t really select against what we value, it just selects for agents that want to acquire resources and are patient. This may cut away some of our selfish values, but mostly leaves unchanged our preferences about distant generations.
Evolution favors replication. But patience and resource acquisition aren’t obviously correlated with any sort of value; if anything, better resource-acquirers are destructive and competitive. The claim isn’t that evolution is intrinsically “against” any particular value, it’s that it’s extremely unlikely to optimize for any particular value, and the failure to do so nearly perfectly is catastrophic. Furthermore, competitive dynamics lead to systematic failures. See the citation.
Shulman’s post assumes that once somewhere is settled, it’s permanently inhabited by the same tribe. But I don’t buy that. Agents can still spread through violence or through mimicry (remember the quote on fifth-generation warfare).
It seems like you are paraphrasing a standard argument for working on AI alignment rather than arguing against it.
All I am saying is that the argument applies to this issue as well.
Over time it seems likely that society will improve our ability to make and enforce deals, to arrive at consensus about the likely consequences of conflict, to understand each other’s situations, or to understand what we would believe if we viewed others’ private information.
The point you are quoting is not about just any conflict, but the security dilemma and arms races. These do not significantly change with complete information about the consequences of conflict. Better technology yields better monitoring, but also better hiding—which is easier, monitoring ICBMs in the 1970s or monitoring cyberweapons today?
One of the most critical pieces of information in these cases is intentions, which are easy to keep secret and will probably remain so for a long time.
By “don’t require superintelligence to be implemented,” do you mean systems of machine ethics that will work even while machines are broadly human level?
Yes, or even implementable in current systems.
I think the mandate of AI alignment easily covers the failure modes you have in mind here.
The failure modes here are a different context where the existing research is often less relevant or not relevant at all. Whatever you put under the umbrella of alignment, there is a difference between looking at a particular system with the assumption that it will rebuild the universe in accordance with its value function, and looking at how systems interact in varying numbers. If you drop the assumption that the agent will be all-powerful and far beyond human intelligence, then a lot of AI safety work isn’t very applicable anymore, and the field increasingly needs to pay attention to multi-agent dynamics. Figuring out how to optimize large systems of agents is absolutely not a simple matter of figuring out how to build one good agent and then replicating it as much as possible.
given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds
This is wholly speculative. I’ve seen no evidence that consequentialists “feel bad” in any emotionally meaningful sense for having made donations to the wrong cause.
This is the same sort of effect people get from looking at this sort of advertising, but more subtle
Looking at that advertising slightly dulled my emotional state. Then I went on about my day. And you are worried about something that would be even more subtle? Why can’t we control our feelings and not fall to pieces at the thought that we might have been responsible for injustice? The world sucks, and when one person screws up, someone else is suffering and dying at the other end. Being cognizant of this is far more important than protecting feelings.
if, say, individual EAs had information about giving opportunities that were more effective than EA Funds, but donated to EA Funds anyways out of a sense of pressure caused by the “at least as good as OPP” slogan.
I think you ought to place a bit more faith in the ability of effective altruists to make rational decisions.
Who hurt you?
Very little research is done on advancing AI towards AGI, while a large portion of neuroscience research and also a decent amount of nanotechnology research (billions of dollars per year between the two) are clearly pushing us towards the ability to do WBE, even if that’s not the reason that research is being conducted right now.
Yes, but I mean they’re not trying to figure out how to do it safely and ethically. The ethics/safety worries are 90% focused around what we have today, and 10% focused on superintelligence.
Amateur question: would it help to also include back-of-the-envelope calculations to make your arguments more concrete?
Don’t think so. It’s too broad and speculative, with ill-defined values. It just boils down to (a) whether my scenarios are more likely than the AI-Foom scenario, and (b) whether my scenarios are more neglected. There aren’t many other factors that a complicated calculation could add.
I don’t think this is true in very many interesting cases. Do you have examples of what you have in mind? (I might be pulling a no-true-scotsman here, and I could imagine responding to your examples with “well that research was silly anyway.”)
The parenthetical is probably true, e.g. for most of MIRI’s traditional agenda. If agents don’t quickly gain decisive strategic advantages, then you don’t have to get AI design right the first time; you can make many agents and weed out the bad ones. So the basic design desiderata are probably important, but it’s just not very useful to do research on them now. I’m not familiar enough with your line of work to comment on it, but just think about the degree to which a problem would no longer be a problem if you could build, test, and interact with many prototype human-level and smarter-than-human agents.
Whether or not your system is rebuilding the universe, you want it to be doing what you want it to be doing. Which “multi-agent dynamics” do you think change the technical situation?
Aside from the ability to prototype as described above, there are the same dynamics which plague human society: multiple factions with good intentions end up fighting due to security concerns or tragedies of the commons, or multiple agents with different priors interpret every new piece of evidence they see differently and so go down intractably separate paths of disagreement. FAI can solve all the problems of class, politics, economics, etc. by telling everyone what to do, for better or for worse. But multiagent systems will only be stable with strong institutions, unless they have some other kind of cooperative architecture (such as universal agreement in value functions, in which case you now have the problem of controlling everybody’s AIs but without the benefit of having an FAI to rule the world).

Building these institutions and cooperative structures may have to be done right the first time, since they are effectively singletons, and they may be less corrigible or require different kinds of mechanisms to ensure corrigibility. And the dynamics of multiagent systems mean you cannot accurately predict the long-term future merely based on value alignment, which you would (at least naively) be able to do with a single FAI.
If evolution isn’t optimizing for anything, then you are left with the agents’ optimization, which is precisely what we wanted.
Well it leads to agents which are optimal replicators in their given environments. That’s not (necessarily) what we want.
I thought you were telling a story about why a community of agents would fail to get what they collectively want. (For example, a failure to solve AI alignment is such a story, as is a situation where “anyone who wants to destroy the world has the option,” as is the security dilemma, and so forth.)
That too!
The effective altruism subreddit is growing in traffic: https://i.imgur.com/3BSLlgC.png (August figures are 2.5k and 9.5k)
The EA Wikipedia page is not changing much in pageviews: https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org&platform=all-access&agent=user&start=2015-07&end=2017-08&pages=Effective_altruism
You can find the stats by going to the right of the page in moderation tools and clicking “traffic stats”. They only go back a year though. Redditmetrics.com should show you subscriber counts from before that, but not activity.
Are you assuming that crimes committed by people in EA will be towards other people in EA? According to RAINN, 34% of the time the sex offender is a family member. And most EAs have social circles which mostly comprise people who are not in EA, I would think. (This is certainly the case if you take the whole Facebook group to be the EA movement.)
I think that for all intents and purposes we should just use the survey responses as the template for the size of the EA movement, because if someone is on Facebook but is not even involved enough that we can get them to take a survey then we generally have little hope of influencing their behavior, if they even are in EA.
This seems like a well-researched post with accurate statistics, but you didn’t note that EA is demographically somewhat different from the rest of the population. According to the BJS (https://bjs.gov/content/pub/pdf/SOO.PDF), 58% of American sexual assault offenders are white (this includes Hispanics), 40% are black, and 2% are “other”. Meanwhile, the EA survey (http://effective-altruism.com/ea/1ex/demographics_ii/) showed that 89% of EAs identify as non-Hispanic white, 3.3% identify as Hispanic, 0.7% identify as black, and 7% identify as Asian (i.e. other). These stats are quite different from the base rate for the US, in a way that suggests the base rate of offenders in EA is lower than it is for the general population.
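To make that concrete, here is a rough back-of-the-envelope sketch. The US population shares below are approximate assumptions on my part (not from either source), and the calculation assumes that within-group offense rates carry over to EA unchanged, which is a strong assumption:

```python
# Rough back-of-the-envelope: combine offender shares (BJS), EA demographic
# shares (EA survey, regrouped to match BJS categories), and *assumed* US
# population shares to estimate how EA's base rate might compare to the
# general population's.

us_population_share = {"white_incl_hispanic": 0.77, "black": 0.13, "other": 0.10}   # assumed, approximate
us_offender_share   = {"white_incl_hispanic": 0.58, "black": 0.40, "other": 0.02}   # BJS figures above
ea_share            = {"white_incl_hispanic": 0.923, "black": 0.007, "other": 0.07}  # EA survey, regrouped

# Per-group offense rate relative to the overall population rate.
relative_rate = {g: us_offender_share[g] / us_population_share[g] for g in us_population_share}

# Expected EA rate relative to the general population, assuming within-group
# rates carry over to EA unchanged (a strong assumption).
ea_relative_rate = sum(ea_share[g] * relative_rate[g] for g in ea_share)
print(f"EA base rate relative to general population: {ea_relative_rate:.2f}")  # ~0.73 with these numbers
```

With these rough numbers the expected EA base rate comes out roughly 25–30% lower than the general population’s, which is the direction the demographics point.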
The 7.2 rapes per offender figure seems like it comes from a survey of paraphiliacs? Lisak and Miller say it is 4 rapes per offender. Maybe that is just because college students are younger.
Encourage or host dry events and parties.
I think that should be an obvious thing to do. Alcohol already costs money and reduces the intellectual caliber of conversation; we are better off without it.
The second point is irrelevant—what statistic is changed by the prevalence of false rape accusations? The Lisak and Miller study cited for the 6% figure is a survey of self-reports among men on campus.
Yes, I saw that part. But first, just because there are lots of unknown factors doesn’t mean we should ignore the ones that we do know. Suppose we’re too busy to look at anything besides demographics; that’s fine, but it doesn’t mean we should deliberately ignore the information we do have about demographics. We’ll have an inaccurate estimate, but it’s still less inaccurate than the estimate we had before. If you didn’t have time to do this adjustment originally, that’s fine; like I said, you already did a lot of work getting a good statistical foundation here. But we have more information now, so let’s update accordingly.
Now the statistics could be incorrect because of different rates of conviction or indictment or something of the sort. Sure, that is a different possibility, and if we have any suspicions about it then we can make some guesses in order to facilitate a better overall estimate. I would assume, from the outset, uniform priors for conviction rates. Maybe whites are under-represented due to bias in the system, or maybe they are over-represented due to the subcultures in which they live and the social independence or access to legal/judicial resources of their victims.
What are the facts? Sexual offense victims report (https://www.bjs.gov/content/pub/pdf/fvsv9410.pdf) that 57% of offenders are white, closely in line with my other source. Only 27% report the offender as black, which is significantly less than my other source suggests, though of comparatively little consequence for EA going by statistical averages. 6% say other and 8% say unknown.
In this case you are right that it seems like there was a disparity: blacks are apparently convicted disproportionately. But here at least we have an apparently more reliable source of perpetrator demographics, and it says roughly the same thing about what EA base rates would be relative to those of the broader population.
I did not see that note. But for the calculations on the productivity impact, it seemed like one might read them with the assumption that the 80,000 hours in a career are EA career hours. If we don’t have enough information to make an estimate of this proportion, that’s fine, but it definitely doesn’t mean that we should implicitly treat it as if it were 100%; after all, it is certainly less than that. What I read of the calculations just didn’t make it clear, so I wanted to clarify.
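As a minimal sketch of why this matters (the fraction below is a purely hypothetical placeholder, not a figure from the post): the productivity-impact estimate scales linearly with whatever share of those hours are actually EA-directed.

```python
# Minimal sketch: the productivity-impact estimate scales linearly with the
# fraction of a victim's career hours that are EA career hours.
# ea_fraction is a hypothetical placeholder, not a figure from the post.

career_hours = 80_000   # the standard "80,000 hours in a career" figure
ea_fraction = 0.25      # hypothetical: share of career hours that are EA-directed

ea_career_hours = career_hours * ea_fraction
print(ea_career_hours)  # 20000 -- a 4x smaller figure than implicitly assuming 100%
```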
Of course that would be suboptimal; hundreds of hours calculating base rates would certainly not be worthwhile. I’m not offering to do it and I’m not demanding that anyone do it. Hundreds of hours directly studying EA would surely be more worthwhile, I agree on that. All I’m saying is that the information we have now is better than the information we had an hour ago.
Are most acts of sexual violence committed by a select particularly egregious few or by the presumably more common ‘casual rapist’? Answering this question is relevant for picking the strategies to focus on.
Lisak and Miller (link repeated for convenience: http://www.davidlisak.com/wp-content/uploads/pdf/RepeatRapeinUndetectedRapists.pdf) give decent data on the distribution. 91% of rapes/attempted rapes are from repeat offenders.
Has anyone thought about retiring in a foreign country where the cost of living is low? That seems like a great idea to me—all the benefits of saving money, without worrying about work opportunities.