That sounds like it would be helpful, but I would also want people to have a healthier relationship with having an impact and with intelligence than I see some EAs having. It’s also okay to not be the type of person who would be good at the types of jobs that EAs currently think are most important or would be most important for “saving the world”. There’s more to life than that.
I’m also curious about the answer to this question. For the people I know in that category (which excludes anyone who just stopped engaging with AI safety or EA entirely), many are working as software engineers or are on short-term grants to skill up. I’d expect more of them to do ML engineering if there were more jobs in that area relative to more general software engineering. A couple of people I know, after getting rejected from AI safety-relevant jobs or opportunities, have also decided to do master’s degrees or PhDs with the expectation that that might help, which is an option that’s more available to people who are younger.
I will probably be publishing a post on my best guesses for how public discourse and interest in AI existential risk over the past few months should update EA’s priorities: what things seem less useful now, what things seem more useful, what things were surprising to me about the recent public interest that I suspect are also surprising to others. I will be writing this post as an EA and AI safety random, with the expectation that others who are more knowledgeable will tell me where they think I’m wrong.
I mostly haven’t been thinking about what the ideal effective altruism community would look like, because it seems like most of the value of effective altruism might come down to whatever impact it has on steering the world towards better AGI futures. But I think even in worlds where AI risk wasn’t a problem, the effective altruism movement would seem lackluster in some ways.
I am thinking especially of the effect that it often has on university students and younger people. My sense is that EA sometimes influences those people to be closed-minded or at least doesn’t contribute to making them as ambitious or interested in exploring things outside “conventional EA” as I think would be ideal. Students who come across EA often become too attached to specific EA organisations or paths to impact suggested by existing EA institutions.
In an EA community that was more ambitiously impactful, there would be a higher proportion of folks at least strongly considering doing things like: starting startups that could be really big; traveling to various parts of the world to form a view about how poverty affects welfare; having long Google Docs with their current best guesses for how to get rid of factory farming; looking at non-“EA” sources to figure out which effective interventions GiveWell might be missing, perhaps because they’re somewhat controversial; doing more effective science or medical research; writing something on the topic of better thinking and decision-making that could be as influential as Eliezer’s Sequences; expressing curiosity about whether charity is even the best way to improve human welfare; trying to fix science.
And a lower proportion of these folks would be applying to jobs on the 80,000 Hours job board or choosing to spend more time within the EA community rather than interacting with the most ambitious, intelligent, and interesting people amongst their general peers.
I found the conversations I had with some early-career AI safety enthusiasts to show a lack of understanding of paths to x-risk and criticisms of key assumptions. I’m wondering if the early-stage AI field-building funnel might cause an echo chamber of unexamined AI panic that undermines general epistemics and cause-neutral principles.
I don’t think people new to EA lacking knowledge about the specific cause areas they’re excited about is more true for AI x-risk than for other cause areas. For example, I suspect that if you asked animal welfare or global health enthusiasts who are as new as the AI safety folks you talked to about the key assumptions behind different animal welfare or global health interventions, you’d get similar results. It just seems to matter more for AI x-risk, since having an impact there relies more strongly on having better models.
+1, also interested
Thanks! Does that depend on the empirical question of how costly it would be for the AI to protect us and how much the aliens care about us, or is the first number so small that there’s almost always going to be someone willing to trade?
I imagine the civilisations that care about intelligent life far away have lots of others they’d want to pay to protect. I’m also unsure what form their “protect Earth life” preference takes: if it is a conservationist-style “preserve Earth in its current form forever”, that also sounds bad, because I think Earth right now might be net negative due to animal suffering. Though hopefully “there not being sentient beings that suffer” is a common enough preference in the universe, and there are enough aliens who would make reasonable-to-us tradeoffs with suffering that we don’t end up dying because of particularly suffering-focused aliens.
I don’t think it makes any arguments? I also expect to be less convinced that factory-farmed animals have net-positive lives; that claim seems easier to defend for wild animals.
Ooh that sounds interesting, it was cool to see Matthew argue for his position in this Twitter thread https://twitter.com/MatthewJBar/status/1643775707313741824
I think “digital minds can’t be conscious” is an uncommon position among EAs
My guess is @RobBensinger would probably hold that view, based on https://www.lesswrong.com/posts/b7Euvy3RCKT7cppDk/animal-welfare-ea-and-personal-dietary-options. It would be fun to see him debate this, though it’s unlikely he’d choose to.
why does ECL mean a misaligned AGI would care enough about humans to keep them alive? Because there are others in the universe who care a tiny bit about humans even if humans weren’t smart enough to build an aligned AGI? or something else?
Thank you for sharing your thoughts here.
I found it really difficult to reply to this comment, partly because it is difficult for me to inhabit the mindset of trying to be a representative for EA. When I talk to people about EA, including when I was talking to students who might be interested in joining an EA student group, it is more like “I like EA because X, the coolest thing about EA for me is Y, I think Z though other people in EA disagree a bunch with my views on Z for W reason and are more into V instead” than trying to give an objective perspective on EA.
I’m just really wary of changing the things I say until it gets people to do the thing I want (sign up for my student group, care about AI safety, etc.). There are some situations when that might be warranted, like if you’re doing some policy-related thing. However, when running a student group and trying to get people who are really smart and good at thinking, it seems like the thing I’d want to do is just to state what I believe and why I believe it (even and especially if my reasons sound dumb) and then hear where the other person agrees or disagrees with me. I don’t want to state arguments for EA or AI safety to new members again and again in different ways until they get on board with all of it; I want us to collaboratively figure things out.
Some things I got wrong in the past:
In the past, I think I cared excessively about EA and myself seeming respectable, and I think I was wrong about the tradeoffs there. As one concrete example, when talking to people about AI safety I avoided linking to blog posts even when I thought they were more useful to read, and instead sent people links to more legitimate-seeming academic papers and researchers because I thought that made the field seem more credible. I think this and other similar things I did were bad.
I didn’t care enough about people’s character and how much integrity they had in the past. I was very forgiving when I found out someone had intentionally broken a small promise to a colleague or acted in a manipulative way towards someone, because in those cases it seemed to me like the actual magnitude of the harm caused was small relative to the impact of the person’s work. I now think those small harms add up and could be quite costly by adding mistrust and friction to interactions with others in the community.
I dismissed AI x-risk concerns in 2019 without making an honest attempt to learn about the arguments because they sounded weird. I think that was a reasonable thing to do given the social environment I was in. The really big mistake I made there, though, was not telling my friends who had thought about it why I was unconvinced, because I was afraid of seeming dumb for not having read all the arguments already, and because I was afraid of feeling pressured if I did try to argue about it.
I think I thought university EA group organising was the most useful thing for me to do. This seemed sensible to believe for a while but I think I stuck with it for too long because I had signed up to do it. If I had been honestly looking for evidence of the usefulness of the group organising activities I was doing, I would have realised a lot quicker that it was more useful for me to stop and do other things instead. I think this cost me ~100 hours.
I took economics courses during my degree and I don’t think they were particularly helpful for pursuing impactful paths (not that they were unhelpful, it’s just that if I really needed to know the content for some reason, I could have picked it up elsewhere). This is true for all my courses in general.
Unless I am missing something, the main reason to insist on taking more econ classes would be if you want to pursue a further degree such as a master’s in something econ-related. Or if you know you are going to learn econ anyway, and taking a course in it (instead of something else less directly relevant, with econ learned on the side) would save time. If you don’t feel pretty motivated to learn econ anyway, I don’t think the econ thing should be a strong consideration in favour of MORSE.
Are there particular modules you think are really useful that you’d only be able to take on MORSE and not if you were doing maths & stats at Oxford?
A few months ago I felt like some people I knew within community building were doing a thing where they believed (or believed they believed) that AI existential risk was a really big problem but instead of just saying that to people (eg: new group members), they said it was too weird to just say that outright and so you had to make people go through less “weird” things like content about global health and development and animal welfare before telling them you were really concerned about this AI thing.
And even when you got to the AI topic, you had to get people to trust you by talking about misuse risks first in order to be more convincing. This would have been an okay thing to do if those were their actual beliefs. But in a couple of cases, this was an intentional thing to warm people up to the “crazy” idea that AI existential risk is a big problem.
This bothered me.
To the extent that those people now feel more comfortable directly stating their actual beliefs, this feels like a good thing to me. But I’m also worried that people still won’t just directly state their beliefs and will instead continue to play persuasion games with new people, just about different things. Eg: one way this could go wrong is group organisers trying to make it seem to new people like they’re more confident about which interventions within AI safety are helpful than they actually are. Things like: “Oh hey, you’re concerned about this problem, here are impactful things you can do right away, such as applying to this org or going through this curriculum” when they are much more uncertain (or should be?) about how useful the work done by the org is or how correct/relevant the content in the AI safety curriculum is.
There are different ways to approach telling people about effective altruism (or caring about the future of humanity or AI safety etc):
“We want to work on solving these important problems. If you care about similar things, let’s work together!”
“We have figured out what the correct things to do are and now we are going to tell you what to do with your life”
It seems like a lot of EA university group organisers are doing the second thing, and to me this feels weird and bad. A lot of our disagreement about specific things (like my feeling that it is icky to use prepared speeches written by someone else to introduce people to EA, and bad to think of people who engage with your group in terms of where they are in some sort of pipeline) comes from them thinking about things in that second frame.
I think the first framing is a lot healthier, both for communities and for individuals who are doing activities under the category of “community building”. If you care deeply about something (eg: using spreadsheets to decide where to donate, forming accurate beliefs, reducing the risk we all die due to AI, solving moral philosophy, etc) and you tell people why you care and they’re not interested, you can just move along and try to find people who are interested in working together with you in solving those problems. You don’t have to make them go through some sort of pipeline where you start with the most appealing concepts to build them up to the thing you actually want them to care about.
It is also healthier for your own thinking because putting yourself in the mindset of trying to persuade others, in my experience, is pretty harmful. When I have been in that mode in the past, it crushed my ability to notice when I was confused.
I also have other intuitions for why doing the second thing just doesn’t work if you want to get highly capable individuals who will actually solve the biggest problems, but in this comment I just wanted to point out the distinction between the two ways of doing things. I think they are distinct mindsets that lead to very different actions.
What changed your mind about vitamin D?
otoh I’ve found watching “famous” people debate helpful for taking them off the pedestal. It’s demystifying to watch them argue things and think about things in public rather than just reading their more polished thoughts. The former almost always makes impressive-looking people seem less impressive.