Working on AI governance and policy at Open Philanthropy.
Hater of factory farms, enjoyer of effective charities.
How important is compute for AI development relative to other inputs? How certain are you of this?
There have been estimates that there are around 100 AI researchers & engineers focused on AI alignment. This seems quite small given the scale of the problem. What are some of the bottlenecks for scaling up, and what is being done to alleviate this?
What opportunities, if any at all, do individual donors (or people who might not have suitable backgrounds for safety/governance careers) have to positively shape the development of AI?
I’m neither a software engineer nor on the job market, but I found myself reading to the end because of how much fun this post is. Well done!
Might also add my totally-completely-absolutely-unbiased opinion that working at CEA/other orgs in the CEA umbrella is amazing.
Thanks for the feedback! I agree, conveying the potential value of future generations to a general audience can be really tricky. We’re currently working on improving our feedback solicitation process, precisely so we can get input from the wide range of people you flagged — from highly engaged EAs to members of the general public.
I do think there is a tricky balance between going too high-level and going too granular when creating longtermist content for a wide audience, but it’s something I think is extremely valuable to figure out, and I’d like for us to continually improve at it.
Thanks!
Thank you!
My current plan is to keep the application open until I’m satisfied that I’ve found a strong team of approximately 2-4 core writers. I’m not quite sure how long that will take, however.
I could also see a desire to scale in the future, so we could onboard on a rolling basis.
This is interesting, Adam. Thanks for sharing. I think you should consider posting this as a standalone piece on the forum, because I can imagine there will be a wide variety of opinions regarding the speed at which EA should grow. What I will say though is that I really like the idea of doing profiles on specific people — e.g., “How this software engineer approaches charity” — in order to relate to a wider audience. I think this is the exact kind of content we’d like to work with our members to produce, so thanks for sharing the idea!
Thanks Peter :)
N=1, but when I applied to Longview Philanthropy, I received some feedback upon request after my work trial.
One thing I would like to add: I think it is plausible that the results would not be even close to the same if Killingsworth’s study had included responses from people living in low-income countries. For example, I wouldn’t be surprised if money actually has a much stronger effect on happiness for people earning ~$500 per year, as things like medicine, food, shelter, sanitation, etc. probably bring significantly more happiness than the kinds of things bought by people who earn $400,000+ per year.
Also, even if money does make only a small difference (which I find hard to believe at such a low income), you can double, triple, or quadruple the income of 100 people earning $500 per year for less than the cost of doubling the income of one person earning $200,000.
Since we can’t directly test this from Killingsworth’s study — which the blog post was primarily about — the assumption was that the results would be the same for low-income earners.
Thanks for the kind words Linch! Yes, I agree with the motivated reasoning point. I found myself pretty attached to the $75,000 anecdote (me being a fanboy for Kahneman probably contributed to this) even though it didn’t feel quite right. Glad that this new paper allowed me to update while still aligning with one of my core beliefs.
I think Killingsworth’s study captures the same idea that motivates me to do the GWWC pledge while being a bit more nuanced than Kahneman & Deaton’s study. I really do hope this new research can enrich the EA community’s view on the relationship between money and happiness.
Ah. Duh. My bad!
Right now I’m an MSc student at the Oxford Internet Institute studying part-time for a degree in social science of the internet, with a focus on economics.
I also work on content & research at Giving What We Can, which mostly involves simplifying and translating core EA ideas to something that a general audience would like to read/watch.
This summer I will be self-studying AI governance and then joining GovAI as a summer research fellow. Provided this path seems promising for me, I’m hoping to work at the intersection of policy and research in the AI governance field for the foreseeable future.
I’d like to specifically get better at writing more clearly, critical/creative thinking (using helpful mental models, having better reasoning transparency, and generally being more rational), and researching (more specifically, better at reading/interpreting a lot of existing research and forming my own inside view more quickly). More generally, I think I could probably also use better quantitative skills (economic modelling plus interpreting/working with data/statistics). I could also be a more organised person.
I’d also like to start working on my leadership skills so I’m better prepared for later on when I become a more senior member of whatever team I’m on.