A few more thoughts:
(EDIT: It seems from an above comment that current culture-steering is already trying to encourage perspective and the like, so what irked me here isn’t really an issue.) Limiting EA Globals to only people who fit into certain cultures could severely limit perspective/diversity, and in bad cases could make EA as “cult”-y as some people want to believe it is. I think this is probably only true for some misguided attempts to steer culture, and in fact you can steer culture through admissions in ways that encourage things like diversity/perspective. Selecting attendees based on “people Oliver Habryka would hang out with outside the conference” does seem sort of narrow, but I’m guessing this isn’t what you actually want admissions to look like?
In response to “I would like to increase the number of people who don’t need to apply to attend because they are part of some group or have some obvious signal that means they should pass the bar”: I guess it depends what you mean here, but I agree with Zach that it doesn’t seem obvious why this would actually help (although I’d be interested in the specific details of what you mean). Some ways of doing this could make people upset, since from the outside it could look like certain “inner circles” of EA have easy backdoor access to EAGs. This could also make even very impressive/high-impact EAs feel excluded if they’re not part of said groups. Seems to me like there are tons of ways this could go wrong.
I strong-upvoted since this makes sense to me, but are EA Global admissions actually making an effort to steer culture in any way? I was under the impression that EAG admissions were mostly based on engagement with EA and how impressive someone is, which doesn’t really seem to qualify as steering culture. This certainly seems to line up with some of the things said in this post about people being rejected for not being impressive.
Oh right, I guess this would have been easy to find if I didn’t skip straight to the report.
There is a lot of text. Within the 62 pages of documents there are further links to literature reviews, interviews, etc.
Could someone who has read through all of this (or did the research) give me a few of the most convincing empirical tests/findings that support protests being effective? I kind of want to see more numbers.
This post irked me because I think it’s wrong if you raise the bar for what “Ivy-smart” means. If it means something like “being good at an academic thing on a national (or even state) level”, then I think most of these people are at Ivy League schools. For example, I know ~40 people who went to Olympiad training camps or were finalists in research contests, and pretty much all of them other than one or two are at top 10 schools. As another example, one competitive math research program lists its results for alumni:
This research program is, I think, considered about as tough to get into as making the national math Olympiad.
I’m estimating there are maybe 5,000–10,000 people per four-year cohort who would qualify as being good at something at the national level. (How I’m getting this estimate: “good at the national level” = top 500 in something, there are maybe 10ish academic areas, and each of these has 1–2 ways to be good. For example, for the sciences there are olympiads and research; for CS there’s CS research and building impressive projects; etc.) Meanwhile, after a quick Google search, there are about 24,000 who would get a 1500+ SAT score. It seems to me that there are a lot of people who are great at something at the national level who don’t know what EA is. Maybe enough that we should focus on Ivies significantly more than anywhere else?
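Spelling out the rough arithmetic behind that estimate, using only the numbers above (nothing new):

$$500 \text{ (top people per area)} \times 10 \text{ (areas)} \times (1\text{–}2) \text{ (ways to be good)} \approx 5{,}000\text{–}10{,}000 \text{ per cohort.}$$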
As others have said, this really depends on what the context is. If we’re choosing how much money to spend on which university groups, and all university groups have around equal need for money, I’m guessing that the vast majority of the money should go towards Ivy university groups.
Probably something like how much effort should be spent on building EA groups at said university. I agree with the examples Linch gives for where this wouldn’t be relevant.
I don’t think the relevant value here is P(goes to school | smart person). I think it’s P(smart person | goes to school). The latter seems much higher for Ivies (Ivies = top ranked schools) than anywhere else, except places that excel in a certain area (e.g., CMU, although this might also be top ranked).
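To make that distinction concrete, here’s a quick Bayes’ rule sketch with entirely made-up illustrative numbers (assumptions for illustration only, not estimates of the real values):

$$P(\text{smart} \mid \text{Ivy}) = \frac{P(\text{Ivy} \mid \text{smart}) \, P(\text{smart})}{P(\text{Ivy})}$$

If, say, P(Ivy | smart) = 0.3, P(smart) = 0.01, and P(Ivy) = 0.005, then P(smart | Ivy) = 0.6. One conditional can be small while the other is large, which is why the direction of the conditional matters here.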
Thanks! Makes sense.
I interpreted this post as also complaining that there isn’t any sort of consensus among EAs as a community for how to deal with negative press. Most of what you listed wouldn’t qualify as a community-wide strategy. Sure, Open Phil, CEA, 80k, etc. having some PR guidance seems awesome, but what about everyone else? I’m guessing there are many orgs, individuals with a lot of Twitter followers, etc. that don’t know how to respond to negative press.
“There’s a communications strategy being drafted.”
seems to be the only thing you listed that actually addresses this. Which is great! But it seems pretty unrealistic to expect OP to know that this was happening. I’m also guessing this wouldn’t have been answered in an EA Forum question; questions don’t seem to gain that much traction on the Forum, and the vast majority of users probably had no clue a communications strategy was being drafted.
I’m not sure I follow why conservative criticism would lead to expanding the movement to a less represented audience.
Although to be clear, I do think it’s probably correct that this tends to happen more with academic work.
“Academic work should get less status than concrete projects.”
This seems like a pretty strong claim, and I’m not sure I agree with it. Sure, I’m guessing people working on theory sometimes end up doing too much of what’s interesting and not enough of what’s most useful, but I’m guessing projects are also mostly things that people find interesting but that might not be useful.
“Theory = not useful, projects = useful” doesn’t seem to exactly hit what I think the problem is. I’m guessing theory researchers gravitate towards the bits of theory that seem the most interesting, people doing projects gravitate towards the most flashy/interesting projects, and these are the problems.
Yeah, you’re right, it does seem separate, although it’s sort of an adjacent problem? I think the larger problem here is something like “EA opinions are influenced by other EAs more than I’d like them to be”. Over-deference and filter bubbles are two ways in which I think getting too sucked into EA can create bad epistemics.
I didn’t mean to call out MIRI specifically; I just tried to choose an EA org where I could picture filter bubbles happening (since MIRI seems pretty isolated from other places). I know very little about what MIRI work *actually* looks like. I’ll change the original comment to reflect this.
tl;dr, I think deference is more concerning for EA than for other cultures. Relative to how much we should expect EAs to defer, they defer way too much.
1) We should expect EA to have much less of a deference culture than other cultures, since a lot of EA claims are based on things like answers to philosophical questions, long-term future predictions, etc. These kinds of questions are really hard to answer, and I don’t think most experts have a much better shot at answering them than some relatively smart and quantitative university students. Questions about moral philosophy are exactly the kinds of questions you’d expect to have a super wide range of answers, so the number of EAs who claim they’re longtermist is kind of surprising and unexpected. I think this is a sign there’s more deference than there should be.
On the other hand, for more concrete and established scientific fields where experts do have a much better chance at making decisions than students, it makes way more sense to defer to them about what things are important.
2) EAs are optimizing for altruism, so decisions on what to work on require lots of thought. I’m guessing most non-EA people choose to work on things they enjoy or are emotionally invested in.
I can easily tell you, without any evidence or deference, what things I think are fun and am emotionally invested in. But it takes a lot more time and research to figure out what I think is most impactful.
I think EAs having more evidence and reasoning to back up what we’re working on just naturally arises from being an EA, and doesn’t necessarily mean we have better epistemics than other communities.
3) Explicitly saying when you’re deferring to someone seems to do a better job of convincing people “wow! these EA people seem more correct than most other communities” than of actually making us more correct than most other communities. Being explicit about when we defer to people still means we might defer way too much.
4) Edit: I think this point is not actually about deference. Also, I know very little about MIRI and have no idea if this is in any way realistic. I’m guessing you could replace MIRI with some other org and this kind of story would be true, but I’m not totally sure.
Also, idk, I feel like some things that look like original, detailed thinking actually end up being closer to deference than I’d like. I think a story that’s perhaps happened before is: “MIRI researcher thinks hard about AI stuff, and comes up with some original thoughts with lots of evidence. Writes it up on the Alignment Forum. Tons of karma, yay.”
Sure, the thinking is original, has evidence to back it up, and looks really nice, pretty, and useful. That said, even if this is original thinking, I’m guessing that if you looked at how this person was using other people’s opinions to shape their own, it would look something like:
Talking to other MIRI people − 80%
Talking to non-MIRI EAs − 10%
Reading books/opinions written by non-EAs relevant to what they’re working on − 5%
Talking to non-EAs − 5%
So even if this thinking looks really original and intelligent, it still seems like a problem with deference. Not deferring to other MIRI researchers an unhealthy amount would probably look more like getting more insight from mainstream academia and non-EAs.
I guess the point here is that it’s much easier to look like you’re not deferring to people too much than to actually not defer to people too much.
5) I think people in general defer way too much and do not think hard enough about what to work on. I think EAs defer too much and occasionally don’t think hard enough about what to work on. Being better than the former doesn’t really mean I’m satisfied with the latter.
Yep this is also what I’m looking for, thanks!
This is excellent, thank you.
[Question] Where can I find good criticisms of EA made by non-EAs?
Yeah, this makes sense. That being said, I’m guessing that while some people are in theory trying to maximize the “good” they accomplish, in practice it’s easy to forget about options that aren’t easily traceable. My point was also that it’s worth explicitly putting in effort to look for these kinds of options.
By options, I mean something like giving a research project to a more capable person. I’m guessing some people wouldn’t consider that this is a thing they can do.
Which chunks of people?