You could move me by building an explicit quantitative model for a popular question of interest in longtermism that (a) didn't previously have models (so e.g. patient philanthropy or AI racing doesn't count), (b) has an upshot that we didn't previously know via verbal arguments, (c) doesn't involve subjective personal guesses or averages thereof for important parameters, and (d) I couldn't immediately tear a ton of holes in that would call the upshot into question.
I feel that (b) identifying a new upshot shouldn't be necessary; I think it should be enough to build a model with reasonably well-grounded parameters (or well-grounded ranges for them) in a way that substantially affects the beliefs of those most familiar with or working in the area (and maybe enough to change minds about what to work on, whether within AI, toward AI, or away from AI). E.g., more explicitly weighing the risks of accelerating AI through (some forms of) technical research against actually making it safer, better-grounded probabilities of catastrophe from AI, or a better-grounded model for the marginal impact of work. Maybe this isn't a realistic goal with currently available information.
Yeah, I agree that would also count (and as you might expect I also agree that it seems quite hard to do).
Basically with (b) I want to get at "the model does something above and beyond what we already had with verbal arguments"; if it substantially affects the beliefs of people most familiar with the field, that seems like it meets that criterion.