I agree with most points raised in the answers so far. Some specific points I feel are worth mentioning:
I think a 1/10,000 probability of biorisk this century is too low.
I’m more certain of this if you define biorisk as something like “global catastrophic biorisk that is causally responsible for dooming us” than if you think of it as “literally extinction this century from bio alone.”
I think it’s probably good for you to explore a bunch and think about your interest in specific tasks, rather than your abstract interest in AI alignment vs. biorisk as cause areas.
“Follow your gut (among a range of plausible options) rather than penciled Fermis” makes gut sense but pencils badly. I don’t have a strong, robust answer for how to settle this deliberation.
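For concreteness, here is a minimal sketch of what a penciled Fermi comparison of this kind might look like. Every number, category, and weighting in it is a made-up placeholder for illustration, not an estimate from this post or from anyone cited in it:

```python
# Toy "penciled Fermi" comparing cause areas on a crude
# expected-impact proxy. All numbers are made-up placeholders.

causes = {
    # p_doom: chance the risk materializes this century
    # tractability: fraction of the risk your work could shave off
    # fit: your relative productivity in the area
    "AI alignment": {"p_doom": 0.05, "tractability": 0.01, "fit": 0.5},
    "biorisk":      {"p_doom": 0.01, "tractability": 0.02, "fit": 0.9},
}

for name, c in causes.items():
    # Crude proxy: size of the risk x tractability x personal fit.
    score = c["p_doom"] * c["tractability"] * c["fit"]
    print(f"{name}: {score:.1e}")
```

Note that small shifts in any of these made-up inputs flip the ranking, which is part of why the gut-vs-Fermi tension is hard to settle.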
Fortunately, exploring is a fairly robust strategy under a range of reasonable assumptions.