Links to the application form don’t seem to work?
Another newsletter(?) that I quite like is Zvi’s
What disagreements do the LTFF fund managers tend to have with each other about what’s worth funding?
What projects to reduce existential risk that don’t already exist would you be excited to see someone work on, provided they were capable enough?
I’ve also heard people doing SERI MATS, for example, explicitly talk or joke about this: how they’d have to work in AI capabilities if they didn’t get AI safety jobs.
When people do this, do you think they mostly want someone with more skills or knowledge, or someone with better, more prestigious credentials?
Yeah, same. I know of recent university graduates interested in AI safety who are applying for jobs in AI capabilities alongside AI safety jobs.
It makes me think that what matters more is changing the broader environment to care more about AI existential risk (via better arguments, more safety orgs focused on useful research/policy directions, better resources for existing ML engineers who want to learn about it, etc.) rather than convincing individual students one by one to shift to caring about it.
I would be surprised if the true ratio were as low as 1:20 or even 1:10. I wish there were more data on this, though it seems difficult to collect since, at least for university groups, most of the impact (on both capabilities and safety) will occur a few or more years after students start engaging with the group.
I also think it depends a lot on the best opportunities available to them: what opportunities to work on AI safety versus AI capabilities exist in the near future for people with their aptitudes.
I would love to see people debate the question of how difficult AI alignment really is. This has been argued before, for example in the MIRI conversations and elsewhere, but more content would still be helpful for people like me who are uncertain about the question. Also, at the EAG events I went to, it felt like most of the content came from people with more optimistic views on alignment, so it would be cool to see the other side.
On the other hand, I’ve found watching “famous” people debate helpful for taking them off the pedestal. It’s demystifying to watch them argue and think in public rather than just reading their more polished thoughts; it almost always makes impressive-looking people seem less impressive.
That sounds like it would be helpful, but I would also want people to have a healthier relationship with having an impact and with intelligence than I see some EAs having. It’s also okay to not be the type of person who would be good at the types of jobs that EAs currently think are most important or would be most important for “saving the world”. There’s more to life than that.
I’m also curious about the answer to this question. Of the people I know in that category (which excludes anyone who just stopped engaging with AI safety or EA entirely), many are working as software engineers or are on short-term grants to skill up. I’d expect more of them to do ML engineering if there were more jobs in that relative to more general software engineering. A couple of people I know have also decided, after being rejected from AI safety-relevant jobs or opportunities, to do master’s degrees or PhDs in the hope that it might help, an option that’s more available to younger people.
I will probably be publishing a post on my best guesses for how the public discourse and interest in AI existential risk over the past few months should update EA’s priorities: what now seems less useful, what seems more useful, and what surprised me about the recent public interest that I suspect also surprised others. I will be writing this post as an EA and AI safety random, with the expectation that others who are more knowledgeable will tell me where they think I’m wrong.
I mostly haven’t been thinking about what the ideal effective altruism community would look like, because it seems like most of the value of effective altruism might be well approximated by its impact on steering the world towards better AGI futures. But I think even in worlds where AI risk wasn’t a problem, the effective altruism movement seems lackluster in some ways.
I am thinking especially of the effect that it often has on university students and younger people. My sense is that EA sometimes influences those people to be closed-minded or at least doesn’t contribute to making them as ambitious or interested in exploring things outside “conventional EA” as I think would be ideal. Students who come across EA often become too attached to specific EA organisations or paths to impact suggested by existing EA institutions.
In an EA community that was more ambitiously impactful, a higher proportion of folks would at least be strongly considering things like: starting startups that could become really big; travelling to various parts of the world to form a view about how poverty affects welfare; keeping long Google Docs with their current best guesses for how to get rid of factory farming; looking at non-“EA” sources to figure out which more effective interventions GiveWell might be missing, perhaps because they’re somewhat controversial; doing more effective science/medical research; writing something on better thinking and decision-making that could be as influential as Eliezer’s sequences; expressing curiosity about whether charity is even the best way to improve human welfare; and trying to fix science.
And a lower proportion of these folks would be applying to jobs on the 80,000 Hours job board or choosing to spend more time within the EA community rather than interacting with the most ambitious, intelligent, and interesting people amongst their general peers.
I found that the conversations I had with some early-career AI safety enthusiasts showed a lack of understanding of paths to x-risk and of criticisms of key assumptions. I’m wondering whether the early-stage AI field-building funnel might create an echo chamber of unexamined AI panic that undermines general epistemics and cause-neutral principles.
I don’t think people new to EA not knowing much about the cause areas they’re excited about is any more true for AI x-risk than for other cause areas. For example, I suspect that if you asked animal welfare or global health enthusiasts who are as new as the AI safety folks you talked to about the key assumptions behind different animal welfare or global health interventions, you’d get similar results. It just seems to matter more for AI x-risk, since having an impact there relies more strongly on having good models.
+1, also interested
Thanks! Does that depend on the empirical question of how costly it would be for the AI to protect us and how much the aliens care about us, or is the first number small enough that there’s almost always going to be someone willing to trade?
I imagine the civilisations that care about intelligent life far away have lots of others they’d want to pay to protect. I’m also unsure what form their “protect Earth life” preference would take: if it is a conservationist-style “preserve Earth in its current form forever”, that also sounds bad, because Earth right now might be net negative due to animal suffering. Hopefully, though, a preference for there not being sentient beings that suffer is common enough in the universe, and there are enough aliens who would make reasonable-to-us tradeoffs with suffering that we don’t end up dying because of particularly suffering-focused aliens.
I don’t think it makes any arguments? I’d also expect to be less convinced that factory-farmed animals have net-positive lives; the claim that wild animals might seems easier to defend.
Ooh, that sounds interesting. It was cool to see Matthew argue for his position in this Twitter thread: https://twitter.com/MatthewJBar/status/1643775707313741824
It would be cool to have someone with startup experience who also knows a decent amount about EA, because many insights from running a successful startup might apply to people working to ambitiously solve neglected and important problems. Maybe Patrick Collison?