What’s Good Growth?
Yeah, EA is likely less compelling when this is defined as feeling motivating or interesting to the average person right now, although it's hard to judge since EA hasn't been around for anywhere near as long. Many of the issues EAs care about do seem far too weird for the average person. Then again, if you look at feminism, a lot of its ideas were originally only present in an overly academic form; part of the reason they are so influential now is that they have filtered down into the general population in a simpler form (such as "girl power" or "feeling good, rationality bad"). Plus, social justice is more likely to benefit the people supporting it in the here and now, while EA focuses more on other countries, other species and other times, which is always a tough sell.
"SJ is an extremely inclusive movement (basically by definition)"
I'm generally wary of arguments by definition. SJ is indeed very inclusive of racial minorities and people who are LGBTI, but very much not when it comes to ideological diversity. And some strands can be very unwelcoming to members of majority groups. So it's much more complex than that.
"There are definitely many who see these more in the movement/tribe sense"—For modern social justice, the tribal framing tends to focus on who is a good or bad person, while for EA it tends to focus more on who to trust. (There is a strand of thought within social justice that says we shouldn't blame individuals for systemic issues, but it's relatively rare.) EA makes some effort to be anti-tribal, while social justice is less worried about the downsides of tribalism.
Greater knowledge of psychology would be powerful, but why should we expect the sign to be positive, instead of, say, making the world worse by improving propaganda and marketing?
Why is Leverage working on psychology? What is it hoping to accomplish?
This seems like a good idea and definitely something I'd consider once I learn enough about AI for this to be valuable to others.
"It's not clear that advanced artificial intelligence is going to arrive any time within the next several decades"—On the other hand, it seems, at least to me, most likely that it will. Even if several more breakthroughs are required to reach general intelligence, those may still come relatively fast: deep learning has finally become useful in a wide enough array of applications that there is orders of magnitude more money and talent in the field than ever before. This by itself wouldn't necessarily guarantee fast progress in a field, but AI research is still the kind of area where a single individual can push the research forward significantly. And governments are beginning to realise the strategic importance of AI, so even more resources are flooding in.
"One of the top AI safety organizations, MIRI, has now gone private so now we can't even inspect whether they are doing useful work."—This is not an unreasonable choice, and we have their past record to go on. Nonetheless, there are more open options if this is important to you.
"Productive AI safety research work is inaccessible to over 99.9% of the population, making this advice almost useless to nearly everyone reading the article."—Not necessarily. Even if becoming good enough to be a researcher is very hard, it probably isn't nearly as hard to become good enough in a particular area to help mentor other people.
I’m definitely in favour of this kind of project since I feel more EAs should be experimenting with small projects.
"The situation seems pretty symmetric, though: if a politician builds roads just to get votes, and an NGO steps in and does something valuable with that, the politician's counterfactual impact is still the same as the NGO's"—True, but the NGO's counterfactual impact is reduced, when I feel it would be fairer for the NGO to be able to claim the full amount (though of course you'd never know the government's true motivations in real life).
The order indifference of Shapley values only makes sense from a perspective with perfect knowledge of what the other players will do. Without that knowledge, a party that spent a huge amount of money on a project that was almost certainly going to be wasteful, and was only saved when another party appeared by sheer happenstance, was not making good spending decisions. Similarly, many agents won't be optimising for Shapley value, say a government which spends money on infrastructure just to win political points, without caring whether it'll be used; such agents don't properly deserve a share of the gains when someone else steps in to make the project actually effective.
I feel that this article presents Shapley value as just plain superior, when instead a combination of both Shapley value and counterfactual value will likely be a better metric. Beyond this, what you really want to use is something more like FDT where you take into account the fact that the decisions of some agents are subjunctively linked to you and that the decisions of some other agents aren’t. Even though my current theory is that very, very few agents are actually subjunctively linked to you, I suspect that thinking about problems in this fashion is likely to work reasonably well in practise (I would need to dedicate a solid couple of hours in order to be able to write out my reasons for believing this more concretely)
If we run any more anonymous surveys, we should encourage people to pause and consider whether they are contributing productively or just venting. I’d still be in favour of sharing all the responses, but I have enough faith in my fellow EAs to believe that some would take this to heart.
I'm most concerned about attempts to politicise the movement since, unlike most of the other risks, this risk is adversarial. We have to thread the needle of operating and maintaining our reputation in a politicised environment without letting this distort our way of thinking.
I suspect that it could be impactful to study, say, a master's in AI or computer science even if you don't really need it. University provides one of the best opportunities to meet and deeply connect with people in a particular field, and I'd be surprised if you couldn't persuade at least a couple of people of the importance of AI safety without really trying. If you went in with the intention of networking as much as possible, I think you could have much more success.
It was interesting to read your strategy, particularly what you aren't focusing on. The one part I'd be somewhat skeptical of is decreasing upskilling. People, particularly the people we want to join our community, want to grow and improve. It's important to be realistic about how much someone can upskill in a limited amount of time, but these kinds of events seem like a key draw.
One of the vague ideas spinning around in my head is that, in addition to EA, which is a fairly open, loosely co-ordinated, big-tent movement with several different cause areas, there might also be value in a more selective, tightly co-ordinated, narrow movement focusing just on the long-term future. Interestingly, this would be an accurate description of some EA orgs, with the key difference being that those orgs tend to rely on paid staff rather than volunteers. I don't have a solid idea of how this would work, but thought I'd put it out there...
That is pretty concerning. I would love an explanation of this as well!
I'm strongly in favour of creating a fellowship with a fancy name and website to allow people to build career capital, or at least to make accepting these fellowships not a step backwards. "EA Grant" doesn't exactly sound prestigious.