Hi everyone 🦋
My name is Leonie, and I’m an Operations generalist looking for a remote ops(-adjacent) role at a high-impact organization! I spent 3.5 years running operations at a $1–2M revenue organization and took on shared leadership during an unplanned absence of company leadership, maintaining continuity and supporting operational and financial priorities.
I came across EA in the fall of last year through my job search. I was struck by the idea that good intentions aren’t enough, and that we can and should use evidence to figure out where our efforts actually do the most good. Since then, I’ve read career guides and core EA ideas, applied to high-impact jobs, and started engaging with my local EA group. I’m currently halfway through CEA’s High-Impact Career Pivot Bootcamp (highly recommend!) and new to this Forum.
I have many thoughts and questions about how to pick a cause area. I’d love to chat with people working in AI safety/governance about the risk of causing harm, and I wonder how pro-AI someone should be to enter (and, more importantly, stay fulfilled in) the field. If that’s you, please don’t be shy to say hi, I’d love to pick your brain!
Hey Leonie! Welcome to the EA Forum.
I’m around if you ever have questions,
Toby, from the EA Forum team :)