I’ve been meaning to write about what it’s like trying to figure out your career direction between 18 and 21 while being part of the EA community—a time that feels, at least for me, like the most uncertain and exploratory part of life. You’re not just asking yourself what you want to do, but also grappling with questions about impact and doing the most good, which adds another hard layer to an already complex period.
For anyone at university who’s getting introduced to EA and feeling overwhelmed about career decisions, I want to share some thoughts. I’ve been there—feeling unsure about whether I was making the “right” choices or whether I was doing enough to have impact. I’ve seen others around that age wrestle with the same questions. I still wrestle with questions about my impact. You have my empathy.
If that’s where you are right now, maybe a few scattered pieces of advice from someone a little further down the road could help. I won’t turn this into a long essay, but if any of this resonates—or if you want more specific guidance—I’d be happy to expand on these thoughts:
1. 80,000 Hours is not gospel: Of course, they don’t claim to be gospel—they explicitly want you to explore different career options, and they provide tools for you to do the thinking yourself. But it’s very easy to just default to their listed career options and cause areas. 80,000 Hours won’t help you make the right choices if you aren’t willing to accept that it’s just one piece of your career puzzle. Most people don’t just end up working on what they want by default, unfortunately.
2. Think beyond conventional impact paths: On that note, if something doesn’t fit neatly into how a typical EA career pans out, that might feel uncomfortable. But that’s okay—outside this community, people do all sorts of things in the world. Going to university doesn’t automatically prepare you for AI policy work or give you operations skills. You’ll probably need to get experience in the outside world that isn’t an EA career path, and that can be hard if all you’ve consumed at university is EA philosophy and its traditional career paths. This is why not pigeon-holing yourself is a good idea (see point 4).
3. Get ready for ego hits: Yes, there’s plenty written about how EA jobs are hard to get, but lots of jobs that seem shiny and potentially useful for career capital will reject you—because you’re not the only one who wants that shiny job. You might get lucky and end up right where you want to be, but you’re probably just like everybody else: inexperienced and trying to make it in the world. Each job application can take weeks of effort. You can make it to interview rounds, all excited about the possibility of doing something you want, only to be rejected because someone else has 10 years more experience than you. This will happen, and it will be hard. You just have to get back up and try again. I found it useful to remind myself every time I got rejected that ‘they can reject me, but they can’t kill my spirit’—and that helped me muster the motivation to push forward.
4. Don’t let EA become your whole life: I switched my degree to be very AI Governance focused (which may be paying off), made EA friends, went to EA retreats. It’s so enticing because the university EA community tends to be interesting, thoughtful, and ambitious—that’s pleasant to be around and can mean your life gets wrapped up in it. Getting invited to EA Global conferences in the Bay Area when you’re a twenty-year-old at university hits that status-seeking part of your brain hard. People think it’s really cool, and it feels good when they do. I wish I could say I was above caring what others think, but my brain (like most people’s) is wired to chase social validation at times. While there’s plenty of advice out there about letting go of status-seeking—and you should definitely work on that—I think it’s important to acknowledge how these dynamics can pull you deeper into making EA your whole identity. I strongly suggest investing in other communities and finding interests outside the movement. This advice might seem obvious to any adult, but when you’re at university finding your people, and those people happen to offer both intellectual stimulation and status boosts, it’s really easy to stick to the comfortable option.
5. Don’t dismiss grades—they’re part of the bigger picture: I absorbed some wrong advice about grades not mattering through the rationality community. But they do matter: not just for master’s applications, but as a signal to employers about your ability to work hard and follow through. Even if EA jobs don’t always list grade requirements, having good grades demonstrates competence and work ethic. More importantly, engaging deeply with your subject teaches you how to tackle difficult problems and work systematically—skills that matter regardless of where you end up. And actually trying with your degree and doing well can make university a much more pleasant experience.
Hope that’s useful to somebody.
Thank you for writing this! I wish I had internalized some of these points more while I was at university, and I suspect others will feel the same.
One thing in particular that I recognized is viewing 80,000 Hours (and the EA community more broadly) as offering definitive answers, rather than tools and questions. Looking back, I realize I maybe fell into that mindset. I almost expected that if I just followed the “right” path they laid out and worked as hard as I could, I’d maximize my impact. That was, of course, a very soothing thought, drastically simplifying the complexity of my career choice. But it was also very wrong, and I’m grateful that this quick take is now there to point this out :)
Good news! The 10-year AI moratorium on state legislation has been removed from the budget bill.
The Senate voted 99-1 to strike the provision. Senator Blackburn, who originally supported the moratorium, proposed the amendment to remove it after concluding her compromise exemptions wouldn’t work.
https://www.yahoo.com/news/us-senate-strikes-ai-regulation-085758901.html?guccounter=1
Just worth pointing out because it was not obvious to me: the House could add it back. We’ll have to wait and see whether that happens, but it seems unlikely.
I’m surprised the vote was so close to unanimous!
Hot take: ultimately this is not a hill I want to die on, and overall I think Bluedot Impact is good for the world. Having interacted with some of the people there, they seem lovely and I don’t want to burn bridges. But I’ve found some of their recent marketing on their website and LinkedIn somewhat aesthetically cringe. It feels like it’s trying very hard to cater to a kind of tech-bro/Silicon Valley way of speaking. Maybe this is working for them, but I can’t help feeling icked by it, and it makes me lose a bit of faith in the project.
For example, in hiring for a new Tech Lead role, they have an accompanying blog post that says: “We’re hiring for a Tech Lead. Meet Carol, our ideal candidate.”
Meet Carol, a senior engineer at a Series B startup that’s losing its way. Multiple years experience, previously built 0-to-1 at a failed startup and has multiple side projects others are using. Could make £200k+ at FAANG but chooses impact over money.
“I’m tired of building things nobody cares about. I want to ship things that matter, fast, with people who give a shit.” – Carol, probably
Outcome obsessed, not code precious. Will happily torch 3 months of work if something better emerges. Measures success by user impact, not lines shipped
Post-failure wisdom. Has startup scar tissue. Been sold dreams that evaporated. Now has pattern recognition for what’s real vs what’s venture theatre
Full-stack ownership. Talks to users, analyses data, mocks designs, writes docs. Allergic to “that’s not my job”
Speed fundamentalist—Ships to real users fast. Viscerally hates bureaucracy, long meetings, permission-seeking culture
What They Want
Real users, real impact. “I want to ship something on Monday and see 1000 people use it by Friday”
Clear line to survival. Not another pre-PMF prayer circle. Evidence of traction, revenue, or at minimum a brutally honest path to it.
Mission that matters. Not another ad-tech optimisation tool or crypto dashboard that makes the world slightly worse.
Speedy by default. Where “let’s just try it” beats “let’s have another meeting about it”
What They’ll Trade
Will grind when it matters—Happy to pull long hours for launches, crises, or breakthrough moments. Not for theatre
Will learn anything useful—New stack? Fine. New domain? Fine. As long as it’s not resume-driven development
Will work with ambiguity—But not chaos. There’s a difference between startup scrappiness and headless chicken syndrome
This also reads a bit like how LLMs write.
Maybe this is working for them, but I can’t help feeling icked by it, and it makes me lose a bit of faith in the project.
Plausibly useful feedback, but I think this is ~0 evidence for how much faith you should have in BlueDot relative to factors like reach, content, funding, materials, testimonials, reputation, public writing, past work of team members… If I were doing a grant evaluation of BlueDot, it seems highly unlikely that this would make it into the eval.
I think it’s counterproductive to criticize organizations that are beginning to do more marketing/outreach. This type of criticism reinforces EA norms that it’s good to build something, but not good to promote or talk about it. Those norms are a contributing factor to EA being so behind on this type of work.
We should be celebrating organizations that are making an effort on this and encouraging others to do more.
It surprises me that this is seen as the norm—it feels almost antithetical to having impact if you never talk about what you’re doing. At the same time, a lot of EA orgs seem to have put serious effort into marketing in recent years (GWWC, 80k, EA Globals, etc.), and I think that’s good.
To be clear, I’m not saying it’s bad to talk about what you’re doing. My concern is more subjective—it’s about the style of marketing. Some of it mimics a kind of entrepreneurial/tech-speak that I personally find aversive. That might just be because this creates an association with Silicon Valley’s culture that has driven AI progress in risky ways, so I react strongly to the vibe. But ultimately, Bluedot may be right that this style resonates with the people they want to hire. If so, great—I’m very open to the idea that my subjective reaction doesn’t line up with what’s impactful.
Re: ‘We should be celebrating organisations that are making an effort on this and encouraging others to do more’ — sure, though I think we may be talking past each other. I agree marketing is important: your ideas won’t have much effect if nobody knows about them. But I’m not for default celebration. Sometimes marketing is misleading, manipulative, or just feels icky, and the value really depends on the context. I’m much more inclined to celebrate marketing that pushes in the direction of truth-seeking. Too often, marketing goes the opposite way. (That’s a general comment, not aimed at Bluedot specifically or any other EA-adjacent org for that matter.)
I totally understand the thinking here and agree it makes sense when you look at it at the individual level. But if you zoom out, the upshot of a community so focused on nitpicks like this is that people leading EA orgs are nervous to say anything about their orgs or work. This leads to research being under-distributed, fellowships and courses being undersubscribed, ideas largely staying within the community, etc.
EA orgs aren’t going to get better at this work without making some attempts, and right now the incentives are so stacked against trying (because of the nitpick culture) that it’s systematically neglected. I think BlueDot deserves a lot of credit for being willing to try new things.
I disagree completely. The goal of a job ad should be to turn off candidates who are not a good fit so they don’t bother applying, and turn on applicants who would be a good fit.
Their job ad being divisive is a good thing, if it is effective at filtering for the people they are looking for.
I think that the use of an LLM here embodies what they are prioritizing: speed and results-orientation.
The blog post pattern-matches to AI-speak, but clearly communicates what they are looking for. If anything, I would update positively for the prudent use of AI here.
I think it’s possible to gain the efficiency of LLM assistance without sacrificing style/tone — it just requires taste and more careful prompting/context, which seems worth it for a job ad. Maybe it works for their intended audience, but it puts me off.
Sure—I’m not bothered that much by AI-speak per se. It seems like a reasonable trade-off.
Most good AI Governance takes are not on the EA Forum or LessWrong. They live on Substack! (And on X, where they get reposted and turned into threads.) You should consider exploring the AI Governance Substack space more. Some examples: Anton Leicht—Threading the Needle; Miles Brundage.
Suggestion: Enlarge the font size for pronouns on EA Global/EA retreat name cards
There was a period when I used they/them pronouns and was frequently misgendered at EA events. This likely occurred because I present as male, but regardless, it was a frustrating experience. I often find it difficult to correct people and explicitly mention my preferred pronouns, especially in socially taxing environments like EAGs or retreats. Increasing the size of the pronouns on name cards could be helpful.
I run frequently, and it would be nice to eventually see more GiveWell-recommended charities represented at marathon events in the UK. For example, I didn’t get a place through the ballot for the London Marathon, but I could still obtain a charity place. However, I don’t find any of the available charities particularly appealing to fundraise for, and I wish orgs like the Against Malaria Foundation were offered instead.
The value to the charity consists of both the funds counterfactually raised through the race and the value of the fundraising leads generated through the runner’s activity. I’m curious about what BOTEC someone who knows more about the marathon-fundraising model than I might come up with. My off-the-cuff guess is that, to make the effort cost-effective enough, the charity would need a critical mass of runners who were (a) sufficiently invested in the charity to appear credible to their networks (vs. using it more as a way to gain entry) and (b) could tap wealthy-enough fundraising networks to generate significant post-race expected value.
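For concreteness, here is a minimal BOTEC sketch of that structure in Python. Every number is a made-up placeholder (team size, entry fees, funds raised, lead values), not real charity or marathon data; factor (a) shows up in the per-runner fundraising figure and factor (b) in the lead values.

```python
# Minimal BOTEC sketch: every number below is a made-up placeholder,
# not real charity or marathon data.

runners = 20                   # assumed team size
entry_cost_per_runner = 400    # assumed fee the charity pays per place (GBP)
admin_cost = 5_000             # assumed fixed cost of recruiting/supporting runners

avg_raised_per_runner = 2_000  # (a) assumed counterfactual funds per credible runner
leads_per_runner = 3           # (b) assumed new donor leads from each runner's network
value_per_lead = 150           # (b) assumed long-run expected value per lead (GBP)

costs = runners * entry_cost_per_runner + admin_cost
funds_raised = runners * avg_raised_per_runner
lead_value = runners * leads_per_runner * value_per_lead

net = funds_raised + lead_value - costs
print(f"Net expected value: £{net:,}")
```

With these placeholders, the fixed admin cost means a two-runner team comes out negative while a twenty-runner team clears its costs comfortably, which is one way the critical-mass point above could cash out.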
(I am mostly articulating feelings here. I am unsure about what I think should change.)
I am somewhat disappointed with the way Manifund has turned out. This isn’t a critique of the Manifund team, or a claim that regranting is a bad idea, but after a few months of excitement and momentum, things have somewhat decelerated. While the occasional cool project comes along, most of the projects on the website don’t seem particularly impressive to me. I also feel like some of the regrantors are slow to move money, though it could be that the previous problem is feeding into this.
I am now trying to make YouTube videos explaining AI Governance. Here is a video on RSPs. The video has a few problems, and the editing is sometimes choppy, but this can be a fun hobby, and it lets me build skills that seem useful to have—the first being the confidence to talk to a camera. If you have feedback, here is a form.
For a first video, I thought it was surprisingly good! :) I appreciate that you speak clearly, the script is pretty short and to the point, and honestly I thought the editing was way better than most of YouTube (you cut enough to keep it moving, but not so much as to be annoying or distracting). There were a couple of times I felt you could have edited it down more. I liked the infographic cut-ins, and you could probably add slightly more visual aids before it gets to be too many.
I’m glad you enjoy making them, and I encourage you to keep doing it!
I’d like to get opinions on something. I’m planning to experiment with making YouTube videos on AI Governance over the next month or two. Ideally, I want people to see these videos so I can get feedback or get told that I’ve said something incorrect, which is helpful for correcting my own model around things.
I’d share these videos by posting on the EA Forum, but I’m unsure about the best approach:
a) Posting on the frontpage feels like seeking attention or promoting for views, especially since I’m new to video-making and don’t expect high quality initially.
b) Posting as personal blog posts seems less intrusive, as only those who opt to see personal posts will see them. This feels like I have “permission” to make noise and is less intimidating.
c) Putting them in my quick takes section, which is currently my default, would be even more out of the way.
Given my account’s karma, my posts typically start with 4 or 5 karma and stay on the frontpage for a few hours by default. I think the forum has improved a lot recently—there’s less volume of posts and more interesting discussions. I don’t want to create noise each time I make a video.
However, each video is relevant to the EA community. If people don’t like a video, it’ll naturally move off the frontpage fairly soon. I’m more likely to get views if I don’t post it as a personal blog post or a quick take. These views are important to me because they mean more interesting feedback and a higher likelihood that I’ll improve at making videos. (Also, given I am only human, more views and engagement mean more motivation to keep making things.)
I’d appreciate others’ opinions on this. I recognise that part of my hesitation probably stems from a lack of confidence and fear of others’ opinions, but I don’t think these are necessarily good justifications for my decision.
If it were me, I would default to posting them as quick takes. I think that would get them more visibility than a personal blog post (not sure), and quick takes are a good fit for asking for feedback on more early stage things.
But I am somewhat biased because I’m pretty scared to publish frontpage posts, and I don’t want to discourage you from posting it there, especially if you are willing to put in some additional effort to make it valuable for frontpage readers (such as by including a written version of the contents, or by asking for specific feedback in the post, or framing your post as a discussion about the video topic that people can continue in the comments). As you say, in the worst case, if it doesn’t get many upvotes, it will fall off pretty quickly.
On another note, I think the Forum isn’t currently that well-suited for sharing video content, so if you have suggestions for how we can do better there, let me know! :)
I wonder if anyone has examined the pros and cons of protesting against AI labs? I have seen a lot of people uncertain about this. It may be useful for someone to put up a post on this after doing maybe <10 hours of thinking.
I’m doing some thinking on the prospects for international cooperation on AI safety, particularly potential agreements to slow down risky AI progress like CHARTS. Does anyone know of a good website or resource that summarizes different countries’ current views and policies regarding deliberately slowing AI progress? For example, something laying out which governments seem open to restrictive policies or agreements to constrain the development of advanced AI (like the EU?) versus which ones want to charge full steam ahead, no matter the risks. Or which countries seem undecided or could be persuaded. Basically, I’m looking for something that synthesizes various countries’ attitudes and stated priorities when it comes to potentially regulating the pace of AI advancement, especially policies that could slow the race to AGI. Let me know if you have any suggestions!
Not exactly what you’re looking for (because it focuses on the US and China rather than giving an overview of lots of countries), but you might find “Prospects for AI safety agreements between countries” useful if you haven’t already read it, particularly the section on CHARTS.