It can’t take more than ~50 events for every AI Safety researcher to get to know each other.
And key ideas are not seeded at a single point in time; they come together from lots of reading and talking.
There is not *the one event* that made the difference while all the others were practically useless. That’s not how research works. Sure, there is randomness, and some meetings are more important than others.
But if it took on average 50,000 events for one such key introduction to happen, then we might as well give up on having events, or find a better way to do them. Otherwise we are just wasting everyone’s time.
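As a rough sanity check on the “~50 events” number, here is a toy simulation; the field size, attendees per event, and the 90% threshold are all assumptions made up for illustration:

```python
import random

# Toy model: how many random-attendance events until most pairs of
# researchers in a small field have met? All numbers are made-up
# assumptions for illustration, not data about the actual field.
N = 300          # assumed number of AI Safety researchers
K = 60           # assumed attendees per event, sampled uniformly
TARGET = 0.90    # "everyone knows each other" ~= 90% of pairs have met

random.seed(0)
met = set()
total_pairs = N * (N - 1) // 2

events = 0
while len(met) < TARGET * total_pairs:
    attendees = random.sample(range(N), K)
    for i, a in enumerate(attendees):
        for b in attendees[i + 1:]:
            met.add((a, b) if a < b else (b, a))
    events += 1

print(f"~{events} events until {TARGET:.0%} of pairs have met")
```

Under these assumptions the answer comes out in the tens of events, not tens of thousands, and the pair coverage grows fastest in the early events.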
But all the other events were impactful, just not compared to those one or two events. The goal of having all the events is to hopefully be the 1/50,000 that has ridiculously outsized impact: it’s high expected value even if, comparatively, all the other events have low impact. And again, that’s comparatively. Compared to, say, most other events, an event on AI safety is ridiculously high impact.
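As a toy expected-value calculation (the payoff numbers are invented placeholders, not estimates):

```python
# Toy expected-value calculation; all payoffs are invented placeholders.
p_hit = 1 / 50_000      # chance this particular event is the outlier
v_hit = 1_000_000       # value of the outlier event (arbitrary units)
v_typical = 10          # value of a typical event, still positive

ev = p_hit * v_hit + (1 - p_hit) * v_typical
print(ev)  # ~30: the rare tail alone triples the per-event expected value
```

The typical events still contribute, but the possibility of the outlier dominates the expectation.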
This is true; much of the networking impact of events is front-loaded.
I happen to think that relative utility is very clustered at the tails, whereas expected value is more spread out. This intuition comes from the startup world.
However, it’s important to note that I have also developed a motivation system that allows me not to find this discouraging! Once I started thinking of opportunities for doing good in expected-value terms, and of concrete examples of my contributions in absolute rather than relative terms, neither of these facts was upsetting or discouraging.
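A minimal sketch of that intuition, assuming a heavy-tailed (lognormal) outcome distribution; the parameters are arbitrary, not fitted to anything:

```python
import numpy as np

# Sketch: draw 50,000 event outcomes from a heavy-tailed distribution
# and check how much of the total value the top 1% of events capture.
# The lognormal and sigma=3 are arbitrary assumptions for illustration.
rng = np.random.default_rng(0)
outcomes = rng.lognormal(mean=0.0, sigma=3.0, size=50_000)

outcomes.sort()
top_share = outcomes[-500:].sum() / outcomes.sum()  # top 1% of events

print(f"top 1% of events capture {top_share:.0%} of realized value")
# Ex ante, though, every event had the same expected value: the EV is
# spread out across events even though realized utility clusters in the tail.
```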
Some relevant articles:
https://forum.effectivealtruism.org/posts/2cWEWqkECHnqzsjDH/doing-good-is-as-good-as-it-ever-was
https://www.independent.co.uk/news/business/analysis-and-features/nassim-taleb-the-black-swan-author-in-praise-of-the-risk-takers-8672186.html
https://foreverjobless.com/ev-millionaires-math/
https://www.facebook.com/yudkowsky/posts/10155299391129228
I’m OK with hits-based impact. I just disagree about events.
I think you are correct about this for some work, but not for other work. Things like operations and personal assistance are multipliers, which can consistently increase the productivity of those they serve.
Events that are focused on sharing information and networking fall into this category. People in a small field will get to know each other and each other’s work eventually, but if there are more events it will happen sooner, which I model as an incremental improvement.
But some other events feel much more hits-based, now that I think of it: anything focused on getting people started (e.g. helping them choose the right career), or events focused on ideation. I notice that I’m less interested in running these types of events, which is interesting. Because these events also differ in other ways, there are alternative explanations, but it seems worth looking into.
Thanks for providing the links; I should read them.
(Of course, everything relating to X-risk is all-or-nothing in terms of impact, but we can’t measure and reward that until it no longer matters anyway. Therefore, in terms of AI Safety, I would measure success by research output, which can be shifted incrementally.)