You could move me by building an explicit quantitative model for a popular question of interest in longtermism that (a) didn’t previously have models (so e.g. patient philanthropy or AI racing doesn’t count), (b) has an upshot that we didn’t previously know via verbal arguments, (c) doesn’t involve subjective personal guesses or averages thereof for important parameters, and (d) I couldn’t immediately tear a ton of holes in that would call the upshot into question.
I feel that (b) identifying a new upshot shouldn’t be necessary; I think it should be enough to build a model with reasonably well-grounded parameters (or well-grounded ranges for them) in a way that substantially affects the beliefs of those most familiar with or working in the area (and maybe enough to change minds about what to work on, whether within AI, toward AI, or away from it). E.g., more explicitly weighing the risks of accelerating AI through (some forms of) technical research against actually making it safer, better-grounded estimates of the probability of catastrophe from AI, or a better-grounded model of the marginal impact of work. Maybe this isn’t a realistic goal with currently available information.
Yeah, I agree that would also count (and as you might expect I also agree that it seems quite hard to do).
Basically with (b) I want to get at “the model does something above and beyond what we already had with verbal arguments”; if it substantially affects the beliefs of people most familiar with the field that seems like it meets that criterion.