It’s also very much worth reading the linked pdf, which goes into more detail than the fact sheet.
Except, perhaps, dictators and other ne’er-do-wells.
I would guess that a significant number of power-seeking people in history and the present are power-seeking precisely because they think that those they are vying for power with are some form of “ne’er-do-wells.” So the original statement:
Importantly, when longtermists say “we should try and influence the long-term future”, I think they/we really mean everyone.
with the footnote doesn’t seem to mean very much. “Everyone, except those viewed as irresponsible,” historically, at least, has certainly not meant everyone, and to some people means very few people.
There is for the ML safety component only. It's very different from this program in time commitment (much lower), stipend (much lower), and prerequisites (much higher, since it requires prior ML knowledge), though. There are a lot of online courses that just teach ML, so you could take one of those on your own and then do this one.
Sure, here they are! Also linked at the top now.
No, it is not being run again this year, sorry!
I have collected existing examples of this broad class of things on ai-improving-ai.safe.ai.
(More of a meta point somewhat responding to some other comments.)
It currently seems unlikely there will be a unified AI risk public communication strategy. AI risk is an issue that affects everyone, and many people are going to weigh in on it. That includes both people who are regulars on this forum and people who have never heard of it.
I imagine many people will not be moved by Yudkowsky’s op-ed, and others will be. People who think AI x-risk is an important issue but who still disagree with Yudkowsky will have their own public writing that may be partially contradictory. Of course people should continue to talk to each other about their views, in public and in private, but I don’t expect that to produce “message discipline” (nor should it).
The number of people concerned about AI x-risk is going to get large enough (and arguably already is) that credibility will become highly unevenly distributed among those concerned about AI risk. Some people may think that Yudkowsky lacks credibility, or that his op-ed damages it, but that needn’t damage the credibility of everyone who is concerned about the risks. Back when there were only a few major news articles on the subject, that might have been more true, but it’s not anymore. Now everyone from Geoffrey Hinton to Gary Marcus (somehow) to Elon Musk to Yuval Noah Harari is talking about the risks. While it’s possible everyone could be lumped together as “the AI x-risk people,” at this point, I think that’s a diminishing possibility.
There is often a clash between “alignment” and “capabilities,” with some saying AI labs are pretending to do alignment while actually doing capabilities, and others saying the two are so closely tied that it’s impossible to do good alignment research without producing capability gains.
I’m not sure this discussion will be resolved anytime soon. But I think it’s often misdirected.
I think often what people are wondering is roughly “is x a good person for doing this research?” Should it count as beneficial EA-flavored research, or is it just you being an employee at a corporate AI lab? The alignment and capabilities discussions often seem secondary to this.
Instead, I think we should stick to a different notion: something is “pro-social” (I’m not attached to the term) AI x-risk research if it’s research that (1) has a shot at reducing x-risk from AI (rather than increasing it or doing nothing) and (2) is not incentivized enough by factors external to the lab, to pro-social motivation, and to EA (for example: the market, the government, the public, social status in Silicon Valley, etc.)
Note (1) should include risks that the intervention changes timelines in some negative way, and (2) does not mean the intervention isn’t incentivized at all, just that it isn’t incentivized enough.
This is actually fairly similar to the scale/tractability/neglectedness framework, but it (1) incorporates downside risk and (2) doesn’t run into the problem of having EAs want to do things “nobody else is doing” (including other EAs). EAs should simply do things that are underincentivized and good.
So, instead of asking things like, “is OpenAI’s alignment research real alignment?” ask “how likely is it to reduce x-risk?” and “is it incentivized enough by external factors?” That should be how we assess whether to praise the people there or tell people they should go work there.
Thoughts?
Note: edited “external to EA” to “external to pro-social motivation and to EA”
I expect that if plant-based alternatives ever were to become as available, tasty, and cheap as animal products, a large proportion of people, and likely nearly all EAs, would become vegan. Cultural effects do matter, but in the end I expect them to be mostly downstream of technology in this particular case. Moral appeals have unfortunately had limited success on this issue.
Thanks for sharing your perspective, it’s useful to hear!
I don’t think the orthogonality thesis is correct in practice, and moral antirealism certainly isn’t an agreed-upon position among moral philosophers, but I agree that point 17 seems far-fetched.
This post lines up with my outsider perspective on FHI, and it seems to be quite measured. I encourage anyone who thinks that Bostrom is really the best leader for FHI to defend that view here (anonymously, if necessary).
We should also celebrate the politicians and civil servants at the European Commission and EU Food Agency for doing the right thing. Regardless of who may have talked to them, it was ultimately up to them, and so far they’ve made the right choices.
A suggestion I’m throwing out just for consideration: maybe create a specific section on the frontpage for statements from organizations. I don’t think there are that many organizations that want to make statements on the EA forum, but they usually seem pretty worth reading for people here. (Often: something bad happened, and here’s our official stance/explanation).
A downside could be that this means organizations can be more visible than individuals about community matters. That seems possibly bad (though also how it usually works in the broader world). But it seems worse for the forum moderators to arbitrarily decide if something is important enough to be displayed somehow.
Agreed. My sense is that much of the discomfort comes from the tendency people have to want their career paths validated by a central authority. But that isn’t the point of 80k. The point of 80k is to direct people towards whatever they think is most impactful. Currently that appears to be mostly x-risk.
If you meet some of the people at places like 80k, I think it’s easier to realize that they are just people who have opinions and failings like anyone else. They put a lot of work into making career advising materials, and they might put out materials that say that what you are doing is “suboptimal.” If they are right and what you’re doing really is clearly suboptimal, then maybe you should feel bad (or not; it depends on how much you want to feel bad about not maximizing your altruistic impact). But maybe 80k is wrong! If so, you shouldn’t feel bad just because some people who happen to work at 80k made the wrong recommendation.
Yes, I think it’s impossible not to have norms about personal relationships (or really, anything socially important). I should perhaps have provided an example of this. Here is one:
If you move to a new place with a lot of EAs, you will likely at some point be asked if you want to live in a large group house with other EAs. These group houses are a norm, and a lot of people live in them. This is not a norm outside of EA (though it may be in some other communities), so it’s certainly a positive norm that has been created.
Even if EAs tended to live overwhelmingly in smaller houses, or lived with people who weren’t EAs, then that would just be another norm. So I really don’t think there is a way to escape norms.
I appreciate this post. But one mistake it makes, which I think is an extremely common one, is assuming that a community can exist without (soft) norms.
Every community has norms. It is impossible to get out of having norms. And so I don’t think we should be averse to trying to consciously choose them.
For example, in American society it is a norm to eat meat. Sometimes this is in fact because people are actively trying to get you to eat meat. But mostly, nobody is telling other people what to eat -- people are allowed to exercise their free choice (though animals aren’t). But this norm, while freeing for some, is constricting for others. In many places, if I go to a restaurant, there won’t be much good vegetarian food. In some places, there is a norm to have vegetarian food. But there is no place where there is no norm: in some places, there is a norm to have it, and in others, there is a norm not to have it. The norms can be stronger or weaker, but there is no place without norms.
Currently, there is the non-coercive but “soft” norm in EA that young people interested in AI safety research will go to Berkeley. The post you link is an example of that. People are being actively encouraged to go to Berkeley. They are being paid specifically to go to Berkeley in some cases. For the reasons you give, this could potentially be really good, but the comments on that post also give reasons why it might not be!
You gave the following reason why norms are often not so good:
they prevent people from doing harmless things that they want to do.
This is true. But one could just as easily say of other norms:
they encourage people to do slightly harmful things they wouldn’t otherwise want to do.
Or perhaps:
they fail to discourage people from doing harmful things that they want to do.
The “default norm” is what the community happened to settle on. But it is a norm as much as any other. And it isn’t necessarily the best one.
I certainly didn’t mean to imply that if you don’t have one of those bullet points, you are going to be “blacklisted” or negatively affected as a result of speaking your mind. They just seemed like contributing factors for me, based on my experience. And yeah, I agree different people evaluate differently.
Thanks for sharing your perspective.
There is this, but I agree it would be good if there were one that described the process in substantially more detail.
(You are probably getting downvotes because you brought up polyamory without being very specific about describing exactly how you think it relates to why Open Phil should have a public COI policy. People are sensitive about the topic, because it personally relates to them and is sometimes conflated with things it shouldn’t be conflated with. Regardless, it doesn’t seem relevant to your actual point, which is just that there should be a public document.)
People have been having similar thoughts to yours for many years, including myself. Navigating through EA epistemic currents is treacherous. To be sure, so is navigating epistemic currents in lots of other environments, including the “default” environment for most people. But EA is sometimes presented as being “neutral” in certain ways, so it feels jarring to see that it is clearly not.
Nearly everyone I know who has been around EA long enough to do things like run a university group eventually confronts the fact that their beliefs have been shaped socially by the community in ways that are hard to understand, including by people paid to shape those beliefs. It’s challenging to know what to do in light of that. Some people reject EA. Others, like you, take breaks to figure things out more for themselves. And others press on, while trying to course correct some. Many try to create more emotional distance, regardless of what they do. There’s not really an obvious answer, and I don’t feel I’ve fully figured it out myself. All this is to just say: you’re not alone. If you or anyone else reading this wants to talk, I’m here.
Finally, I really like this related post, as well as this comment on it. When I ran the Yale EA in-depth fellowship, I assigned it as a reading.
Sorry not to weigh in on the object-level parts about university groups and what you think they should do differently; since I’ve graduated I’m no longer a community builder, so I’m somewhat less interested in that discussion.