Since there will be a limited amount of money, what is your motivation for giving the low impact projects anything at all?
I’m not sure. The vibe I got from the original post was that it would be good to have small rewards for small impact projects?
I think the high impact projects are often very risky, and will most likely have low impact. Perhaps it makes sense to compensate people for taking the hit for society so that 1⁄1,000,000 of the people who start such projects can have high impact?
I’m unsure what size you have in mind when you say small.
I don’t think small monetary rewards (~£10) are very useful for anything (unless lots of people are giving small amounts, or unless I do a lot of small projects that add up to something that matters).
I also don’t think small impact projects should be encouraged. If we respect people’s time and effort, we should encourage them to drop small impact projects and move on to bigger and better things.
If you think that the projects with the highest expected impact also typically have a low success rate, then standard impact purchases are probably not a good idea. Under this hypothesis, what you want to do is reward people for expected success rather than actual success.
I talk about success rather than impact, because for most projects you’ll never know the actual impact. By “success” I mean your best estimate of the project’s impact, from what you can tell after the project is over. (I really meant success, not impact, from the start; I probably should have clarified that somehow.)
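To make that concrete, here is a minimal sketch of the two reward rules (the numbers and code are mine, purely hypothetical): an impact purchase pays on realised success, while an expected-success purchase pays on the ex-ante estimate.

```python
import random

def impact_purchase(realised_impact, price_per_impact):
    """Standard impact purchase: pay for what actually happened."""
    return realised_impact * price_per_impact

def expected_success_purchase(p_success, impact_if_success, price_per_impact):
    """Pay for expected success: the ex-ante estimate, whatever the outcome."""
    return p_success * impact_if_success * price_per_impact

# Hypothetical risky project: 1-in-1,000 chance of impact 1,000, else 0.
p, v, price = 0.001, 1000, 10
random.seed(0)
outcomes = [v if random.random() < p else 0 for _ in range(100_000)]

avg_outcome_pay = sum(impact_purchase(o, price) for o in outcomes) / len(outcomes)
ev_pay = expected_success_purchase(p, v, price)

# The funder pays the same on average under both rules, but the
# outcome-based rule leaves ~99.9% of the organisers with nothing.
print(f"average pay under impact purchase:   {avg_outcome_pay:.2f}")
print(f"pay per project on expected success: {ev_pay:.2f}")
```

Under the hypothesis that the highest expected impact projects rarely succeed, the second rule compensates everyone for the risk they took rather than only the lucky few.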
I’d say that for most events, success is fairly predictable, and more so with more experience as an organiser. If I keep doing events, the randomness will even out. Would you say that events are low impact? Would you say events are worth funding?
Can you give an example of the type of high impact project you have in mind? How does your statement about risk change if we are talking about success instead?
I think most events will be low impact compared to the highest impact events. Let’s say you have 100,000 AI safety events. I think most of them will be comparatively low impact, but one in particular ends up creating the seed of a key idea in AI safety, and another ends up introducing a key pair of researchers who go on to do great things together.
Now, if I want to pay those two highest impact events in proportion to their impact relative to all the other events, I have a few options (rough arithmetic sketched after the list):
1. Pay all of the events based on their expected impact prior to the events, so the money evens out.
2. Pay a very small amount of money to the other events, so I can afford to pay the two events that had many orders of magnitude higher impact.
3. Only buy a small fraction of the impact of the very high impact events, so I have money left over to pay the small events and can reward them all on impact equally.
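Here is the rough arithmetic behind these three options, with hypothetical numbers throughout (100,000 events, two outliers at 10,000× the baseline impact, and a made-up budget):

```python
# Hypothetical numbers: 100,000 events, two with 10,000x the baseline impact.
n_events = 100_000
baseline_impact = 1.0
outlier_impact = 10_000.0
impacts = [baseline_impact] * (n_events - 2) + [outlier_impact] * 2
budget = 1_000_000.0  # made-up total budget in £

total_impact = sum(impacts)

# Option 1: pay on expected impact prior to the event (assumed equal here),
# so the money evens out.
option1 = [budget / n_events] * n_events

# Option 2: pay proportionally to realised impact; the outliers absorb
# a large share, leaving very little for each normal event.
option2 = [budget * i / total_impact for i in impacts]

# Option 3: buy only a small fraction of the outliers' impact, then price
# all the purchased impact at one uniform rate.
outlier_fraction = 0.01  # hypothetical: buy 1% of each outlier's impact
priced_impacts = [i if i == baseline_impact else i * outlier_fraction
                  for i in impacts]
price = budget / sum(priced_impacts)
option3 = [i * price for i in priced_impacts]

print(f"Option 1: every event gets £{option1[0]:.2f}")
print(f"Option 2: normal event £{option2[0]:.4f}, outlier £{option2[-1]:,.0f}")
print(f"Option 3: normal event £{option3[0]:.2f}, outlier £{option3[-1]:,.0f}")
```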
Wait, what?
100,000 AI safety events?
Like 100,000 individual events?
There is a typo here, right?
Nope, 1⁄50,000 seems like a realistic ratio of very high impact events to normal impact events.
It can’t take more than ~50 events for every AI Safety researcher to get to know each other.
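A back-of-the-envelope check on that figure, with assumed numbers (~200 researchers, ~30 attendees per event):

```python
from math import comb

# Assumed numbers: ~200 AI safety researchers, ~30 attendees per event.
n_researchers = 200
event_size = 30

pairs_to_cover = comb(n_researchers, 2)   # 19,900 acquaintance pairs
pairs_per_event = comb(event_size, 2)     # at most 435 new pairs per event

# Lower bound, ignoring that attendee lists overlap in practice.
min_events = pairs_to_cover / pairs_per_event
print(f"{min_events:.0f} events")  # ~46, i.e. on the order of ~50
```

Overlap between attendee lists pushes the real number up, but it stays orders of magnitude below 50,000.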
And key ideas are not seeded at a single point in time; they come together from lots of reading and talking.
There is not *the one event* that made the difference while all the others were practically useless. That’s not how research works. Sure, there is randomness, and some meetings are more important than others.
But if it took on average 50,000 events for one such key introduction to happen, then we might as well give up on having events. Or find a better way to do it. Otherwise we are just wasting everyone’s time.
But all the other events were impactful, just not compared to those one or two events. The goal of having all the events is the hope of being the 1⁄50,000 that has ridiculously outsized impact. It’s high expected value even if, comparatively, all the other events have low impact. And again, that’s comparatively: compared to, say, most other events, an event on AI safety is ridiculously high impact.
This is true; much of the networking impact of events is front-loaded.
I happen to think that relative utility is very clustered at the tails, whereas expected value is more spread out. This comes from intuitions from the startup world.
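A toy simulation of that intuition (the distribution and parameters are my own assumptions, not anything from this thread): if each event’s realised impact is an independent draw from a heavy-tailed distribution, every event has the same expected value ex ante, yet a handful of events ends up with most of the realised total.

```python
import random

random.seed(1)
n_events = 100_000
alpha = 1.1  # Pareto shape; hypothetical choice giving a very heavy tail

# Every event is an i.i.d. draw, so all have identical expected value ex ante.
impacts = [random.paretovariate(alpha) for _ in range(n_events)]
impacts.sort(reverse=True)

total = sum(impacts)
top_10 = sum(impacts[:10])
print(f"Top 10 of {n_events} events: {100 * top_10 / total:.0f}% of total impact")
# With a tail this heavy, a few events dominate the realised total,
# even though the ex-ante expected value was identical for every event.
```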
However, it’s important to note that I have also developed a motivation system that allows me not to find this discouraging! Once I started thinking of opportunities for doing good in expected value terms, and of concrete examples of my contributions in absolute rather than relative terms, neither of these facts was upsetting or discouraging.
Some relevant articles:
https://forum.effectivealtruism.org/posts/2cWEWqkECHnqzsjDH/doing-good-is-as-good-as-it-ever-was
https://www.independent.co.uk/news/business/analysis-and-features/nassim-taleb-the-black-swan-author-in-praise-of-the-risk-takers-8672186.html
https://foreverjobless.com/ev-millionaires-math/
https://www.facebook.com/yudkowsky/posts/10155299391129228
I’m ok with hits-based impact. I just disagree about events.
I think you are correct about this for some work, but not for others. Things like operations and personal assistance are multipliers, which can consistently increase the productivity of those who are served.
Events that are focused on sharing information and networking fall in this category. People in a small field will get to know each other and each other’s work eventually, but if there are more events it will happen sooner, which I model as an incremental improvement.
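A toy model of that incremental picture (all numbers assumed): simulate random events in a small field and track the fraction of researcher pairs who have met. Coverage rises quickly and then flattens, which is the front-loading mentioned above, while each extra event still buys an incremental improvement.

```python
import random
from itertools import combinations

random.seed(2)
n_researchers, event_size, n_events = 200, 30, 100  # assumed numbers

researchers = list(range(n_researchers))
total_pairs = n_researchers * (n_researchers - 1) // 2
met = set()  # pairs who have been introduced so far

for event in range(1, n_events + 1):
    attendees = random.sample(researchers, event_size)
    met.update(frozenset(p) for p in combinations(attendees, 2))
    if event in (10, 25, 50, 100):
        print(f"after {event:3d} events: "
              f"{100 * len(met) / total_pairs:.0f}% of pairs have met")
# Each additional event adds fewer new introductions than the last:
# the networking benefit is front-loaded but improves incrementally.
```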
But some other events feel much more hits-based, now that I think of it: anything focused on getting people started (e.g. helping them choose the right career), or events focused on ideation.
I notice that I’m less interested in doing those more hits-based types of events. This is interesting. Because these events also differ in other ways, there are alternative explanations, but it seems worth looking into.
Thanks for providing the links; I should read them.
(Of course, everything relating to X-risk is all or nothing in terms of impact, but we can’t measure and reward that until it no longer matters anyway. Therefore, for AI Safety I would measure success in terms of research output, which can be shifted incrementally.)