Aryeh Englander is a mathematician and AI researcher at the Johns Hopkins University Applied Physics Laboratory. His work focuses on AI safety and AI risk analysis.
Meta-comment: I noticed while reading this post and some of the comments that I had a strong urge to upvote any comment that was critical of EA and had some substantive content. Introspecting, I think this was partly due to trying to signal-boost critical comments because I don’t think we get enough of those, partly because I agreed with some of those critiques, … but I think mostly because it feels like part of the EA/rationalist tribal identity that self-critiquing should be virtuous. I also found myself being proud of the community that a critical post like this gets upvoted so much—look how epistemically virtuous we are, we even upvote criticisms!
On the one hand that’s perhaps a bit worrying—are we critiquing and/or upvoting critiques because of the content or because of tribal identity? On the other hand, I suppose if I’m going to have some tribal identity then being part of a tribe where it’s virtuous to give substantive critiques of the tribe is not a bad starting place.
But back on the first hand, I wonder if this would be so upvoted if it came from someone outside of EA, didn’t include reassurances about how the author really agrees with EA overall, and was perhaps written in a more polemical style. Are we only virtuously upvoting critiques from fellow tribe members? If the same critique came as an attack from outside, would our tribal defense instincts kick in and make us fight against the perceived threat?
[EDIT: To be clear, I am not saying anything about this particular post. I happened to agree with a lot of the content in the OP, and I have voiced these and related concerns several times myself.]
This seems correct and a valid point to keep in mind—but it cuts both ways. It makes sense to reduce your credence in expert forecasts when you recognize that expert judgment here is less informed than you originally thought. But by the same token, you should probably also reduce your credence in your own forecasts, at least to the extent that they rest on inside-view arguments like, “deep learning will not scale up all the way because it’s missing xyz.” How much to adjust depends, of course, on how much your views rely on inside-view arguments about deep learning. But I suspect that for a lot of people the correct response is to become more agnostic about any timeline forecast, their own included, rather than to conclude that since the experts aren’t so reliable here, they should just trust their own judgment.
Part-time work is an option at my workplace. Less than half-time loses benefits, though, which is why I didn’t want to drop below 50%.
-
I did not have an advisor when I sent the original email, but I did have what amounted to a standing offer from my undergrad ML professor that if I ever wanted to do a PhD he would take me as a grad student. I spent a good amount of time over the past three months deciding whether to take him up on that or apply elsewhere. I ended up taking him up on the offer.
-
I did not discuss it with my employer before sending the original email. It did take some work to get it through bureaucratic red tape though (conflict of interest check, etc.).
-
Does this look close to what you’re looking for? https://www.lesswrong.com/posts/qnA6paRwMky3Q6ktk/modelling-transformative-ai-risks-mtair-project-introduction
If yes, feel free to message me—I’m one of the people running that project.
Also, what software did you use for the map you displayed above?
In your 80,000 Hours interview you talked about worldview diversification. You emphasized the distinction between total utilitarianism and person-affecting views within the EA community. What about diversification beyond utilitarianism entirely? How would you incorporate other normative ethical views into cause prioritization considerations? (I’m aware that in general this is basically just the question of moral uncertainty, but I’m curious how you and Open Phil view this issue in practice.)
True. My main concern here is the lamppost issue (looking under the lamppost because that’s where the light is). If the unknown unknowns affect the probability distribution, then personally I’d prefer to incorporate that or at least explicitly acknowledge it. Not a critique—I think you do acknowledge it—but just a comment.
Shouldn’t a combination of those two heuristics lead to spreading out the probability, but with somewhat more probability mass on the longer term rather than the shorter term?
What skills/types of people do you think AI forecasting needs?
I know you asked Ajeya, but I’m going to add my own unsolicited opinion that we need more people with professional risk analysis backgrounds, and if we’re going to do expert judgment elicitations as part of forecasting then we need people with professional elicitation backgrounds. Properly done elicitations are hard. (Relevant background: I led an AI forecasting project for about a year.)
I know that in the past LessWrong, HPMOR, and similar community-oriented publications have been a significant source of recruitment for areas that MIRI is interested in, such as rationality, EA, awareness of the AI problem, and actual research associates (including yourself, I think). What, if anything, are you planning to do to further support community engagement of this sort? Specifically, as a LW member I’m interested to know if you have any plans to help LW in some way.
[Disclaimer: I haven’t read the whole post in detail yet, or all the other comments, so apologies if this is mentioned elsewhere. I did see that the Partnerships section talks about something similar, but I’m not sure it’s exactly what I’m referring to here.]
For some of these products similar software already exists; it’s just meant for corporations and really expensive. As an example from something I’m familiar with: as an alternative to building on Guesstimate, there’s already Analytica (https://lumina.com/). Now, does it do everything that Guesstimate does, with all of Guesstimate’s features? Probably not. But on the other hand, a lot of these corporate software systems are meant to be customized for individual corporations’ needs. The companies who build these platforms employ people whose job consists of customizing the software for particular needs, and there are often independent consultants who will do that for you as well. (My wife does independent customization for some software platforms as part of her general consulting business.)
So, what if we had some EA org buy corporate licenses to some of these platforms and hand them out to other EA orgs as needed? Where that’s an option, it’s usually (but not always) cheaper to buy and/or modify existing systems than to build your own from scratch.
Additionally, many of these organizations offer discounts for nonprofits, and some may even be interested in helping directly on their own if approached. For example, I have talked with the Analytica team and they are very interested in some of the AI forecasting work we’ve been doing (https://www.alignmentforum.org/posts/qnA6paRwMky3Q6ktk/modelling-transformative-ai-risks-mtair-project-introduction), and with the whole EA/LW approach in general.
Will it turn out cheaper to buy licenses and/or modify Analytica for general EA purposes instead of building on Guesstimate? I don’t know, and it will probably depend on the specifics. But I think it’s worth looking into.