Hi Jeff, thanks for posting about this; I feel that we clearly don’t talk enough about biorisks. Could you share how much background knowledge was necessary for you to make the switch? Was it an interest of yours as a hobby, or did you follow fellowships? I can see very well how your skills were highly transferable, but I wonder about the background-knowledge part, since this is something many of the working professionals I talk to worry about when thinking about making a switch (having never been in academia, they worry that they won’t be good enough despite having skills similar to yours).
I appreciate posts that provide concise comparative overviews of complex concepts. I have some questions that may seem basic to some, but I’d love to receive answers nonetheless.
OpenAI, DeepMind, and Meta are leading labs in AI development, both empirically (e.g., ChatGPT) and financially (in terms of resources). China is known for its ability to replicate existing research rather than create it. Given the concerns about AI and AGI development, particularly the risks of extinction, why do these American and British labs continue their AI work without pausing? Is there external pressure from governments or other nations that might be hostile? I’m trying to understand whether there are motivations beyond just capitalizing on AI’s current momentum, similar to some scientists during the development of the A-bomb who pursued it for personal fame and scientific curiosity while disregarding the risks.
Additionally, although this may not directly relate to your post, have we considered that the emphasis on AI safety, while creating more jobs in that field, might actually stimulate AI growth and increase the risks of extinction? There’s a shared sentiment in the Effective Altruism (EA) community that more people are joining out of interest in AI (safety or otherwise), as the community serves as a hub for discussions and funding related to AI. These newcomers might face a dilemma: are they willing to work for the greater good, even if it means pausing AI development and potentially affecting their livelihoods? How committed are they to their values when it comes to reducing job opportunities and growth in the field they are passionate about? I apologize if this isn’t the ideal platform for these discussions, but they are infrequently addressed on the forum, and I thought they might relate to the topic of talent in AI.
Edit: all I am doing is asking genuine questions, and I’m being downvoted to hell. If you disagree with the usefulness of the questions, tick the ‘I disagree’ box (and even then, why would you care whether my questions get answered?), but downvoting me just screams ‘I refuse criticism on this topic and such questions shouldn’t be answered’. That is neither honest nor rational, and I’m quite sure that those who downvoted me pride themselves a great deal on being rational.
It was a classic lunch topic when I was doing my dissertation, and people often cited this study, but it’s been a few years now. I found a study showing that organization and determination were the main factors in pregnant doctoral students’ success:
‘Discipline and organization. Many participants (n=18) described a high level of internal discipline and organization that helped them to manage the competing demands of pregnancy/parenting and doctoral work. Participants described carefully organizing their responsibilities and their time in order to be able to complete all required doctoral tasks. For many participants, this organization began during—or even before—their pregnancies. In planning pregnancies, participants looked ahead at program milestones to ensure that a pregnancy would not delay their progression.’
Determination: ‘In fact, many participants described an increased determination after they had a child, which motivated them to reorganize their lives or give up leisure time to complete the necessary tasks’ [...] ‘For participants like this one, persisting in the program became not just an individual achievement, but something they were doing for their children as well’ [...] ‘Negative experiences, such as the stress and loss that accompany infertility and/or pregnancy loss, also had the potential to motivate participants to persist.’
Mirick, Rebecca & Wladkowski, Stephanie (2020). ‘Making it Work: Pregnant and Parenting Doctoral Students’ Attributions of Persistence.’ Advances in Social Work, 19, p. 358.
Yeah, I’m not sure how much of this is good news, given the level of interference and vested interests that will inevitably come up.
Motivation and productivity hacks did it for me. It all started with a traumatizing event, and I ended up developing techniques to make the change last.
I started a PhD because it was a great opportunity, and I observed two types of PhD students: those who work a lot but not always efficiently, and those who work less but very efficiently. A study shows that women who become mothers during the PhD work fewer hours than others but much more efficiently, because their time is very limited. Conversely, many people have lots of time (all day) to work on it and get maybe 3-4 hours of productivity at most, precisely because of all that time.
It took a jarring event (almost being fired) to learn to be hard-working. Six months into the PhD, my supervisors told me that I had to redo the report I had worked on until then, in half the time; otherwise they would fire me. Fine, I did it. I worked 9-12 and 13-17, then 18:30-21. Taking breaks was essential: work, walk, eat something, repeat.
Now I organize my life to work efficiently, as I often notice that I do 80 percent of my work in about 50 percent of my time. So I have deep-work time (3 hours every Tuesday and Thursday) and light-work time, where I use pomodoros and especially https://www.focusmate.com, which is the best productivity tool I have ever used! Focusmate allowed me to finish my PhD during Covid times (read: no motivation at all).
Plus one last tip: if you can, put one thing you like to do in your day. Reading an article on the forum, talking to that kind co-worker... At least one thing. It helps a lot mentally.
Thank you very much for the evidence about the funding. OpenPhil has caught up remarkably, and I expect many more donors toward longtermism in the future. GiveWell is excellent, but it remains a single source, and the likelihood that its giving decreases or stops flowing as much as before remains, since it’s more difficult to sustain funding when there is only one source of it.
I was indeed wrong to say that longtermism was the most financed area; however, I wouldn’t be surprised if this data changed very fast and the trend reversed next year, given the current push from the top and the halo effect around longtermism right now.
I don’t want to force myself, but as a community builder, I have to take the leap. Hence my need to understand better how I can get people on board with this.
Thanks! I’ll check them all.
Thanks!
Yeah, I get your point, and factually, sure, it is a small group. I still think that for the sake of community cohesion, advocating for AI within EA would be useful, and finding qualified members to work in AI is easier to do within the community than among the general public, given the profile of EAs.
As for being aware of the issues, that is where we disagree. I don’t think AI has been brought into the community in a careful, thoughtful way, with good epistemics. AI became a specialists’ topic and a self-evident priority very quickly, to the detriment of other EAs who are having a hard time adjusting. Ignoring this will not lead to good things and should not be undervalued.
Because you start from the premise that the majority of the EA community is already convinced and into AI, which I don’t think is true at all; the last post about this, showing diagrams of EAs in the community, was based purely on intuition and nothing else.
The vast majority of EAs are highly educated and wealthy people, and their skills are definitely needed in AI. Someone in EA will be brought onto a job in AI much more easily than someone who has only a vague understanding of it or doesn’t have the skills. So yes, I do think they are in high-leverage positions, since they already occupy good jobs.
As for bringing the arguments: try going against the grain and expressing doubts about how fast AI took over the EA community, how the funding is now distributed, and how it feels to see the vast majority of posts on the EA Forum dedicated to AI. Many of the EAs who do think this way are not on the forum and prefer to stand aside, since they don’t feel like they belong. I don’t want to lose these people. And the fact that I am downvoted to hell every time I dare to say these things is just basic evidence. Everyone who disagrees with me, please explain why instead of just downvoting; otherwise it only reinforces the message ‘this is not an opinion we condone’, without any explanation.
Yes my bad!
It would be nice to know what you are basing these diagrams on, other than intuition. If you are very present on the forum and mainly focused on AI, of course that is going to be your intuition. Here are the dangers I see in this intuition on this topic:
It’s self-reinforcing: people deep into AI, or newly converted, are much more likely to think that EA revolves essentially around AI, and people outside of AI might think ‘Oh, that’s what the community is about now’ and not feel like they belong here. Someone who just lurks out there and sees that the forum is now almost exclusively filled with posts on AI will conclude that EA is definitely about longtermism.
Funding is also a huge signal. With OpenPhil funding essentially AI and other longtermist projects, someone who is struggling to find a job (yes, we have a few talents who are sought out everywhere, but that’s not the case for the majority, even among highly educated EAs) can easily think in an opportunistic way and switch to AI out of necessity instead of conviction; see the MacAskill quote very relevantly cited by someone in the comments.
And finally, the message given by people at the top. If CEA focuses a lot on AI career switches and treats other career switches as neutral, of course community builders will focus on AI people. Which means, factually, more men with a STEM background, since the men-to-women ratio in STEM is still heavily skewed against women (we do have excellent women working in visible and prestigious AI jobs, but unless we consciously find a concrete way of bringing women into the field, it is going to be difficult to maintain this, and it is not a priority so far). The community might thus become even more masculine and even more STEM (exceptions made for philosophers and policy-makers, but the funds for such jobs are still scarce). I know this isn’t a problem for some here, as many of the posts about diversity and their comments attest, but for those who do see the problem with narrowing down even further, the point is made. And it’s just dangerous to focus on helping people switch to AI if, in the end, the number of jobs doesn’t grow as expected.
So all the ingredients are there for EA to turn into an almost exclusively AI community, but as D. Nash said, differentiating between the two might actually be more fruitful.
Also, I’m not sure I want to look back in five years and realize that what made the strength of EA (a highly diverse community in terms of interests and centers of impact, and a measurable impact in the world; I might be very wrong here, but so far measuring impact for all these new AI orgs is difficult, as we clearly lack data and it’s a new field) has just disappeared. Being seen as nerdy and elitist (because let’s face it, that is how EA is seen in the mainstream) is fine as long as we have concrete impact to show for it, but if we become an exclusively technical community that is all about ML and AI governance, it is going to be even more difficult to get traction outside of EA (and we might want to care about that, as explained in a recent post on AI advocacy).
I know I’m going against the grain here, but I like to think that this whole ‘EA is open to criticism’ thing is not a thing of the past. And I truly think these points need to be addressed, instead of being drowned under the new enthusiasm for AI. And in case it needs to be said: I do realize how important AI is, and how impactful working on it is. I just think that this is not enough reason to go all-AI, and that many here tend to forget the other dynamics and factors at play because of the AI takeover in the community.
I agree! And this might be a hot take (especially for those who are already deep into AI issues), but I also see the need, first and foremost, to advocate for AI within our EA community.
People interacting on this forum do not, IMO, give a fully representative picture of EAs: they tend to be very focused on AI, while much of the broader EA community did not enter EA for ‘longtermist’ purposes (as much as I hate using this label, which could apply to so many causes labelled as neartermist) and has not made the shift between what they think is highly impactful and CEA’s recent turn toward focusing a large amount of EA resources on longtermism.
People who have been making career switches and reading about global aid and animal welfare, and who suddenly find out that more than 50 percent of the talks and resources at EA Globals are dedicated to AI rather than other causes, are lost. As a community builder, I am in a weird position where I have to explain why, and convince many in my local community that EA’s focus is changing for the better (a focus coming from the top, the top being closely related to funding decisions, etc.; I’m not saying these are the same people, and it’s obviously more complex than that, but the change towards longtermism and the focus on AI is indisputable).
This results in many EAs feeling highly skeptical about the new focus. It is good that 80k is making simple videos to explain the risks associated with AI, but I still feel that community epistemics are poor when it comes to justifying this change, despite 80k’s very clear website pages about AI safety. The content is there; the outreach, not so much.
And my resulting feeling (because it’s very hard to get actual numbers to gauge the truth) is that on one side we have AI aficionados, ready to switch careers and already deeply knowledgeable about these topics (usually with the convenient background in STEM, machine learning, etc.), the same ones who comment a lot on the forum; and on the other side, the rest of the EA community, which hasn’t felt much sense of belonging lately. I was planning to write a post about that, but I still need to clarify my thoughts and sharpen my arguments, as you can see from how poorly structured this comment is.
So I guess my take is: before advocating for AI safety outside of the community (or at the same time, but doing it first seems more strategic to me in terms of allocating resources), let’s do it inside the community.
Footnote: I know about the Rethink Priorities survey indicating that 70 percent of EAs consider AI safety the most impactful thing to work on (I might be remembering it badly, not confident at all), but I have my reservations about how representative the survey actually is.
I would love to read some concrete examples of cases where the CHT made a difference, notwithstanding the ‘if it’s invisible, it means it is successful’ line of thinking, which I fully understand. I also understand why some people say that the CHT did not intervene in instances where it would have been necessary, because the official line when confronted with these questions remains ‘the CHT isn’t the EA police/conflict solver’.
Yes!!!
Thank you, this is enlightening and helps me understand the thinking process, as someone who has been on the other side of the counter as an applicant. So you got 35% of applicants with experience as senior managers. Is that a confirmation that EA lacks people with senior management experience? At EA Sweden we are targeting mid-career professionals for this reason, among others, so it would be nice to see this confirmed or disconfirmed.
Thanks for your post. Here are a few things that I hope are constructive.
“hey, how do you tell when to release subpar work and when to keep improving?”
This might not work for everybody, but I often get this gut feeling of ‘I could stop working now, even though it is clearly not my best work’ when I finish something. So I think it’s about fighting the perfectionist impulse to make it better by holding on to that first assessment of alright-ness. And quite often, I receive very good feedback on such work. Of course I could get even better feedback with even better pieces, but then my mental health sinks.
Next to Andy’s hippy friends I am a titan of industry.
Yeah, but the issue is that when it comes to grants or work, you are not competing with Andy’s lovely hippy friends, but with people as intense as you, if not more so. So I would separate two things here. Comparison is good when it highlights a strength of yours: like observing that your social abilities and agreeableness are much higher than those of many EAs, or of a few you have in mind. But when it comes to actual work (I’m thinking technical/niche skills), I agree that comparison is always a thief of joy. Personally, I tend to evaluate myself as less clever and less technically gifted than many of my colleagues, or even people working in positions below mine.
You will never impress them until you give up on doing so.
Yeah, I agree! But I’d like to see EAs work on being able to compliment others and say when they are impressed by someone, instead of thinking of it as a vulnerable thing to do. Of course it’s vulnerable, but I have so often felt impressed by people, then worried that they didn’t find me good enough, only to learn from someone else that they were actually impressed by my contributions. This ‘not feeling good enough’ is a big anxiety issue among EAs, even those with an established professional status. Being able to express to each other how impressed we are by one another would probably lessen this anxiety. The few times I tried to do that, people showed signs of embarrassment, though, so I’m not sure everyone agrees with me here!
Better yet, get curious about why you don’t seem to want to work on it, with “I hate it and want to quit” being one of many options.
And here I feel particularly targeted when I think about my way-too-long-and-painful PhD process! Some things require spite though, and I’m happy I didn’t give up. But this might also not apply to the majority of things.
Great initiative!
I’m happy to read that the mixing of mid-career professionals and students went well, since mid-career professionals can feel a bit ‘out of place’ in a very young setting. Did you take concrete steps to avoid this? Or did these people all know each other from the start, as EA can be a bubble? Do you have an idea of what the benefits of mixing with students were for the mid-career people?
Your results are clearly very encouraging, bravo! I’m definitely taking notes to reproduce this at EA Sweden :-)
My point was that Rishi being in power doesn’t mean he will implement policy in favour of diversity. Actually, he stays in the conservative lane, which is factually detrimental to diversity (to the least wealthy populations). Rishi’s agenda is: 1) cutting taxes; 2) cutting NHS coverage for long-term disabilities and cutting NHS-related social allowances; 3) reducing public spending and implementing more austerity policies; 4) harsher policies on small boats. Just like Priti Patel and Suella Braverman.
This is very much in line with Truss, Johnson, and all the Conservative ministers. So his coming from a different racial background doesn’t influence the way he uses his power at all. He governs as a Conservative, for the upper class. Rishi’s main interest has nothing to do with his diverse background but rather with the economic class to which he now belongs. The author of the post has now reached a good position in EA, but he wants to narrow EA, which means that even fewer people from diverse backgrounds will be able to come in.
That’s what I felt! Do you feel that people with management or leadership experience could be useful, even if they come from a sales or consulting background?