Unable to work. Was community director of EA Netherlands, had to quit due to ME/CFS (presumably long covid). Everything written since 2021 with considerable brain fog, and bad at maintaining discussions/replying to comments since.
I have a background in philosophy, risk analysis, and moral psychology. I also did some x-risk research. Currently most worried about AI and US democracy. Interested in biomedical R&D reform.
You make a lot of good points—thank you for the elaborate response.
I do think you’re being a little unfair and picking only the worst examples. Most people don’t make millions working on AI safety, and not everything has backfired. AI x-risk is a common topic at AI companies, which have signed the CAIS statement declaring it should be a global priority, and technical AI safety has a talent pipeline and is a small but increasingly credible field, to name a few. I don’t think “this is a tricky field in which to make a robustly positive impact, so as a careful person I shouldn’t work on it” is a solid strategy at the individual level, let alone at the community level.
That said, I appreciate your pushback, and there are probably plenty of people working on either cause area for whom personal incentives matter more than philosophical ones.