I happen to think that relative utility is very clustered at the tails, whereas expected value is more spread out. This intuition comes from the startup world.
However, it’s important to note that I have also developed a motivation system that lets me not find this discouraging! Once I started thinking of opportunities for doing good in expected-value terms, and of concrete examples of my contributions in absolute rather than relative terms, neither of these facts was upsetting or discouraging.
Some relevant articles:
https://forum.effectivealtruism.org/posts/2cWEWqkECHnqzsjDH/doing-good-is-as-good-as-it-ever-was
https://www.independent.co.uk/news/business/analysis-and-features/nassim-taleb-the-black-swan-author-in-praise-of-the-risk-takers-8672186.html
https://foreverjobless.com/ev-millionaires-math/
https://www.facebook.com/yudkowsky/posts/10155299391129228
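The clustering claim can be illustrated with a toy simulation (a minimal sketch; the lognormal outcome distribution and all parameters are my assumptions, not anything from the comment above): projects that look identical ex ante, each with an expected value of 1.0, still produce realized outcomes where a tiny fraction of projects captures most of the total value.

```python
# Toy illustration (my assumptions): projects with equal expected value,
# but heavy-tailed (lognormal) realized outcomes, startup-style.
import random
import statistics

random.seed(0)

N = 10_000
sigma = 2.0            # high variance -> heavy right tail
mu = -sigma ** 2 / 2   # chosen so the lognormal mean is exactly 1.0

# Every project has the same ex-ante expected value of 1.0...
# ...but realized outcomes are wildly dispersed:
outcomes = [random.lognormvariate(mu, sigma) for _ in range(N)]

outcomes.sort(reverse=True)
top_1pct_share = sum(outcomes[: N // 100]) / sum(outcomes)
median_outcome = statistics.median(outcomes)

print(f"top 1% of projects capture {top_1pct_share:.0%} of total realized value")
print(f"median realized outcome: {median_outcome:.2f} (vs an EV of 1.0)")
```

With these (assumed) parameters, the median realized outcome lands far below the expected value, while a small top slice dominates the total: realized utility clusters at the tail even though expected value was spread evenly across projects.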
I’m OK with hits-based impact; I just disagree about events.
I think you are correct about this for some work, but not for other kinds. Things like operations and personal-assistant work are multipliers, which can consistently increase the productivity of those they serve.
Events focused on sharing information and networking fall into this category. People in a small field will get to know each other and each other’s work eventually, but with more events it happens sooner, which I model as an incremental improvement.
But some other events feel much more hits-based, now that I think of it: anything focused on getting people started (e.g. helping them choose the right career), or events focused on ideation. I notice that I’m less interested in doing those, which is interesting. Because these events also differ in other ways, there are alternative explanations, but it seems worth looking into.
Thanks for providing the links; I should read them.
(Of course, everything relating to x-risk is all-or-nothing in terms of impact, but we can’t measure and reward that until it no longer matters anyway. So for AI safety I would measure success in terms of research output, which can be shifted incrementally.)