In general, I think many people who have the option to join Anthropic could do more altruistically ambitious things, but career decisions should factor in a bunch of information that observers have little access to (e.g. team fit, internal excitement/motivation, exit opportunities from new role …).[1] Joe seems exceptionally thoughtful, altruistic, and earnest, and that makes me feel good about Joe’s move.
I am very excited about posts grappling with career decisions involving AI companies, and would love to see more people write them. Thank you very much for sharing it!
Others in the EA community seem more excited about AI personality shaping than I am. I wouldn’t be surprised if it turned out to be very important, though by that standard a bunch of random, currently unexplored projects would also be ruled in.