I’m ok with hits-based impact. I just disagree about events.
I think you are correct about this for some work, but not for others. Things like operations and personal assistance are multipliers, which can consistently increase the productivity of those they serve.
Events that are focused on sharing information and networking fall into this category. People in a small field will get to know each other and each other’s work eventually, but if there are more events it will happen sooner, which I model as an incremental improvement.
But some other events feel much more hits-based, now that I think of it: anything focused on getting people started (e.g. helping them choose the right career), or events focused on ideation.
So there are types of event that are more hits-based, and I notice that I’m less interested in running them. This is interesting. Because these events also differ in other ways, there are alternative explanations, but it seems worth looking into.
Thanks for providing the links, I should read them.
(Of course, everything relating to X-risk is all-or-nothing in terms of impact, but we can’t measure and reward that until it no longer matters anyway. Therefore, in terms of AI Safety, I would measure success in terms of research output, which can be shifted incrementally.)