Working on AI governance and policy at Open Philanthropy.
Hater of factory farms, enjoyer of effective charities.
“Pursuing an active campaign” is kind of a weird way to frame someone writing a few tweets and comments about their opinion on something.
This rocks, way to go Vincent!!!
Hi Péter, thanks for your comment.
Unfortunately, as you’ve alluded to, technical AI governance talent pipelines are still quite nascent. I’m working on improving this. But in the meantime, I’d recommend:
Speaking with 80,000 Hours (can be useful for connecting you with possible mentors/opportunities)
Regularly browsing the 80,000 Hours job board and applying to the few technical AI governance roles that occasionally pop up on it
Reading 80,000 Hours’ career guide on AI hardware (particularly the bit about how to enter the field) and their write-up on policy skills
Hey Jeff, thanks for writing this!
I’m wondering if you’d be willing to opine on what the biggest blockers are for mid-career people who are considering switching to more impactful career paths — particularly those who are not doing things like earning to give, or working on EA causes?
Without getting into whether or not it’s reasonable to expect catastrophe as the default under standard incentives for businesses, I think it’s reasonable to hold the view that AI is probably going to be good while still thinking that the risks are unacceptably high.
If you think the odds of catastrophe are 10% — but otherwise think the remaining 90% is going to lead to amazing and abundant worlds for humans — you might still conclude that AI doesn’t challenge the general trend of technology being good.
But I think it’s also reasonable to conclude that 10% is still way too high given the massive stakes and the difficulty involved with trying to reverse/change course, which is disanalogous to most other technologies. IMO, the high stakes + difficulty of changing course is sufficient to override the “tech is generally good” heuristic.
This is great! I love the simplicity and how fast and frictionless the experience is.
I think I might be part of the ideal target market, as someone who has long wanted to get more into the habit of concretely writing out his predictions but often lacks the motivation to do so consistently.
Does GWWC currently have a funding gap?
How much would you need to fund the activities you’d ideally like to do over the next two years?
(This can include current and former team members)
When are you gonna go on the 80,000 Hours podcast, Luke? :)
Thank you to the CAIS team (and other colleagues) for putting this together. This is such a valuable contribution to the broader AI risk discourse.
This is great, thanks for the change. As someone who aspires to use evidence and careful reasoning to determine how to best use my altruistic resources, I sometimes get uncomfortable when people call me an effective altruist.
+1, I would like things like that too. I agree that having much of the great object-level work in the field route through forums (alongside a lot of other material that is not so great) is probably not optimal.
I will say though that going into this, I was not particularly impressed with the suite of beginner articles out there — sans some of Kelsey Piper’s writing — and so I doubt we’re anywhere close to approaching the net-negative territory for the marginal intro piece.
One approach to this might be a soft norm of trying to arXiv-ify things that would be publishable on arXiv without much additional effort.
Very cool! I’m excited to see where this project goes.
Thanks for taking the time to write up your views on this. I’d be keen on reading more posts like this from other folks with backgrounds in ML — particularly those who aren’t already in the EA/LessWrong/AIS sphere.
Seems right on priors
I’m sorry to hear that you’re stressed and anxious about AI. You’re certainly not alone here, and what you’re feeling is absolutely valid.
More generally, I’d suggest checking out resources from the Mental Health Navigator service. Some of them might be helpful for coping with these feelings.
More specifically, maybe I can offer a take on these events that’s potentially worth considering. One off-the-cuff reaction I’ve had to Bing’s weird, aggressive replies is that they might be good for raising awareness and making the concerns about AI risk much more salient. I’m far more scared about worlds where systems’ bad behaviour is hidden until things get really bad, such that the world is lulled into complacency up until that point. Having a very prominent system exhibit odd behaviour could be helpful for galvanising action.
I’m appreciative of Shakeel Hashim. Comms roles seem hard in general. Comms roles for EA seem even harder than that. Comms roles for EA during the last 3 months sound unbelievably hard and stressful.
(Note: Shakeel is a personal friend of mine, but I don’t think that has much influence on how appreciative I am of the work he’s doing, or of everyone else managing these crises.)
Yeah, fair point. When I wrote this, I roughly followed this process:
Write article
Summarize overall takes in bullet points
Add some probabilities to show roughly how certain I am of those bullet points, where the process was something like: “okay, I’ll re-read this and see how confident I am that each bullet is true”
I think it would’ve been more informative if I had written the bullet points with the explicit aim of adding probabilities to them, rather than writing them and only thinking afterwards, “ah yeah, I should more clearly express my certainty with these”.
I had the enormous privilege of working at Giving What We Can back in 2021, which was one of my first introductions to the EA community. Needless to say, this experience was formative for my personal journey with effective altruism. I consider Luke an integral part of this.
I can honestly say that I’ve worked with some incredible and brilliant people during my short career, but Luke has really stood out to me as someone who embodies virtue, grace, kindness, compassion, selflessness, and a relentless drive to have a large positive impact on the world.
Luke: thank you for everything you’ve done for both GWWC and the world, and for the incredible impact that I’m confident you will continue to have in the future. I’m sad to imagine a GWWC without you at the helm, but I’m excited to see the great things you’ll end up doing down the line after you’ve had some very well-deserved time with your family.