Epistemic status: pepperoni airplane. Includes reasoning about my blasé and bohemian risk tolerance and career path, which probably doesn't apply to, e.g., people with responsibilities. I think it'd be really hard to approach this question non-anecdotally: employers are cagey about why they decline to hire someone (legal risk), which is a barrier to building a dataset.
I took a six-month sabbatical at the EA Hotel to do some AI safety things, smack in the middle of what was supposed to be a burgeoning IT career. I received zero career advice telling me to leave that startup after a mere eight months, but I'm good at independent study, and I was finding that, in my case, the whole "real-world jobs teach you more than textbooks" thing was a lie. So off to Blackpool I went. Here's the one consideration: I didn't feel like my AI Safety Camp Five project had the freedom to be too mathy. I felt I needed to make sure it had a GitHub presence with nice-looking pull requests, because while I was earnestly attempting an interesting research problem, I was also partially optimizing for legible portfolio artifacts for when I'd end up back on the job hunt.
When I got home, I had a couple of months left of the SERI internship, and toward the end of that I landed an interview at a consultancy for web3 projects (right groupchat, right time) and crushed it using some of my EA Hotel activities (the leader of the consultancy ended up mentioning reinforcement learning on sales calls because of my AI Safety Camp Five project, though no customers took him up on it). I had kinda borked my SERI project and taken a confidence hit as far as alignment or any kind of direct work was concerned, so retreating into E2G was the move; it was also great brain food and exposed me to generically kickass people. The point is that EA was not a negative signal: even a totally weird-sounding sabbatical at a hotel in a beach town scored no negative points in the eyes of this particular employer. The takeaway about my AI Safety Camp Five project is that you can optimize for things legible to normies while doing direct work.
If you have way less bohemian risk tolerance than me, then your EA activities will be way more legible and respectable than mine were at that time.
It's kind of like what they tell people trying to break into IT from "nontraditional paths": the interview is all about spin, narrative, and confidence. IT managers, in my experience (excuse another pepperoni airplane), can get a ton of useful information from stories about problem solving and conflict resolution that took place in restaurants or on film sets! Unless I deliberately conjure the least charitable caricature of HR, I assume that if you talked in an interview about some project you tried for a while with this social movement of philosophers trying to fix massive problems, you'd get a great response.
Nothing much more to add, except that I'll be using "pepperoni airplane" a lot from now on. I agree with your take on nontraditional paths.