One thing you should consider is that most of the impact is likely to be at the tails. For instance, impact across people is probably power-law distributed (this is true in ML in terms of first-author citations; I suspect it could be true for safety specifically). From your description, it seems like you might be more likely to end up in the tail of ability for quantum computing, if one of the best quantum computing startups is trying to hire you. You don’t say that any of the top AI safety orgs are trying to hire you.
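To make the tail-dominance point concrete, here is a minimal sketch; the Pareto shape alpha = 1.5 is an assumed value for illustration, not an estimate from real citation data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: sample "impact" for 100,000 researchers from a Pareto
# distribution. alpha = 1.5 is an assumption for this sketch, not a fit.
alpha = 1.5
impact = rng.pareto(alpha, size=100_000) + 1  # shift so minimum impact is 1

impact.sort()
top_share = impact[-1_000:].sum() / impact.sum()  # share held by the top 1%
print(f"Top 1% of researchers account for ~{top_share:.0%} of total impact")
```

With a heavier tail (alpha closer to 1), the top 1% share grows even larger, which is why it matters so much whether you land in the tail.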
Then you have to consider how useful quantum algorithms are to existential risk. Just because people don’t talk about that subject doesn’t mean it’s useless. How many quantum computing PhDs have you seen on the EA Forum or met at an EA conference? You are the only one I’ve met. As somebody with unique knowledge, you should probably spend a significant chunk of time thinking about how it could fit in, getting feedback on your ideas, sharing your thoughts with the community, etc.
Then you have to think about how likely quantum computing is to make you really rich (probably through equity, not salary) within a time frame where it will matter (e.g. being rich in 5 years is very different from being rich in 50 years).
I think if it’s completely useless for existential risk and is extremely unlikely to make you rich, it’s probably worth pivoting. But consider those questions first, before you give up the chance to be one of the (presumably) very few professional quantum computing researchers in the world.
From your description, it seems like you might be more likely to end up in the tail of ability for quantum computing, if one of the best quantum computing startups is trying to hire you.
I think this is right.
You don’t say that any of the top AI safety orgs are trying to hire you.
I was thinking of trying an academic career. So yeah, no one is really seeking me out; it was more that I was trying to go to Chicago to learn from Victor Veitch and change careers.
Then you have to consider how useful quantum algorithms are to existential risk.
I think it is quite unlikely. I’m 95% sure that QC will not be used in advanced AI, and even if it were, it is quite unlikely to matter for AIS: https://www.alignmentforum.org/posts/ZkgqsyWgyDx4ZssqJ/implications-of-quantum-computing-for-artificial Perhaps I could be surprised, but do we really need someone watching out in case this turns out to be valuable? My intuition is that if that were to happen, I could learn whatever developments had occurred quite quickly with my current background. I could spend, say, 1-3 hours a month, and that would probably be enough to stay on the lookout.
One thing you should consider is that most of the impact is likely to be at the tails. For instance, impact across people is probably power-law distributed (this is true in ML in terms of first-author citations; I suspect it could be true for safety specifically).
In fact, the reason I wanted to go for academia, apart from my personal fit, is that the AI Safety community is currently heavily tilted towards industry. I think there is a real risk that, between blog posts and high-level ideas, we could end up with a reputation crisis. We need to be seen as a serious scientific research area, and for that we need more academic research and much better definitions of the concrete problems we are trying to solve. In other words, if we don’t get over the current “preparadigmaticity” of the field, we risk reputational damage.
Then you have to think about how likely quantum computing is to make you really rich (probably through equity, not salary).
Good question. I have been offered 10k stock options valued at around $5 to $10 each. Right now the startup’s valuation is around $3B. What do you think?
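For reference, my own back-of-envelope, assuming (optimistically) a strike price of zero, since I don’t know the real one:

```python
# Minimal back-of-envelope using only the numbers above. The strike price
# is unknown, so it is assumed to be zero here, making this an upper bound.
n_options = 10_000
value_low, value_high = 5, 10  # dollars per share, from the offer
assumed_strike = 0             # ASSUMPTION: a real strike would reduce the payoff

paper_low = n_options * (value_low - assumed_strike)
paper_high = n_options * (value_high - assumed_strike)
print(f"Paper value today: ${paper_low:,}-${paper_high:,}")  # $50,000-$100,000
```

So even ignoring dilution, vesting, and liquidity, the grant is on the order of $50k-$100k at today’s valuation; getting really rich would require the $3B valuation to grow many times over.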
Also, have you considered 80k advising?
I want to talk to Habiba before making a decision, but she was busy this week with EAGx Oxford. Let’s see what she thinks.
Thanks Thomas!
Related: https://80000hours.org/articles/applying-an-unusual-skill-to-a-needed-niche/