Aryeh Englander is a mathematician and AI researcher at the Johns Hopkins University Applied Physics Laboratory. His work focuses on AI safety and AI risk analysis.
Aryeh Englander
NIST AI Risk Management Framework request for information (RFI)
[Question] Studies / reports on increased wellbeing for extreme poor
[Question] Pros and cons of working on near-term technical AI safety and assurance
[Question] EA intro videos for kids
Day One Project Technology Policy Accelerator
In your 80,000 Hours interview you talked about worldview diversification. You emphasized the distinction between total utilitarianism and person-affecting views within the EA community. What about diversification beyond utilitarianism entirely? How would you incorporate other normative ethical views into cause prioritization? (I’m aware that in general this is basically just the question of moral uncertainty, but I’m curious how you and Open Phil view this issue in practice.)
True. My main concern here is the lamppost issue (looking under the lamppost because that’s where the light is). If the unknown unknowns affect the probability distribution, then personally I’d prefer to incorporate that or at least explicitly acknowledge it. Not a critique—I think you do acknowledge it—but just a comment.
Shouldn’t a combination of those two heuristics lead to spreading out the probability, but with somewhat more probability mass on the longer term rather than the shorter term?
What skills/types of people do you think AI forecasting needs?
I know you asked Ajeya, but I’m going to add my own unsolicited opinion that we need more people with professional risk analysis backgrounds, and if we’re going to do expert judgment elicitations as part of forecasting then we need people with professional elicitation backgrounds. Properly done elicitations are hard. (Relevant background: I led an AI forecasting project for about a year.)
New article from Oren Etzioni
[Question] Brief summary of key disagreements in AI Risk
I know that in the past LessWrong, HPMOR, and similar community-oriented publications have been a significant source of recruitment for areas MIRI is interested in (rationality, EA, awareness of the AI problem), as well as a source of actual research associates (including yourself, I think). What, if anything, are you planning to do to further support community engagement of this sort? Specifically, as a LW member I’m interested to know if you have any plans to help LW in some way.
Does this look close to what you’re looking for? https://www.lesswrong.com/posts/qnA6paRwMky3Q6ktk/modelling-transformative-ai-risks-mtair-project-introduction
If yes, feel free to message me—I’m one of the people running that project.
Also, what software did you use for the map you displayed above?