For an impact purchase, the amount of money is decided based on how good the impact of the project was.
I’m curious about how exactly this would work. My prior is that impact is clustered at the tails.
This means that there will frequently be small-impact projects, and only very occasionally a large-impact project. My guess is that if you want to incentivize the frequent small-impact projects at all, you won’t be able to afford the large-impact projects, because they are many orders of magnitude larger in impact. You could purchase just part of their impact, but in practice this means that there’s a cap on how much you can receive from an impact purchase.
Maybe a cap is fine: you know that all you’ll ever get from an impact purchase is, for instance, $50,000, and the prestige comes from what % of the impact they bought at that price.
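For intuition, here is a minimal simulation sketch of that tension. Everything in it is an assumption for illustration: the lognormal impact distribution, its parameters, and the $1,000,000 budget are made up, not figures from the discussion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: 1,000 projects whose impact is heavy-tailed (lognormal),
# so a handful of projects account for most of the total impact.
impact = rng.lognormal(mean=0.0, sigma=3.0, size=1_000)

budget = 1_000_000  # made-up total funding, in dollars

# Strictly proportional impact purchase: each project is paid its share of impact.
payout = budget * impact / impact.sum()

print(f"largest payout: ${payout.max():,.0f}")
print(f"median payout:  ${np.median(payout):,.2f}")
top_share = np.sort(payout)[-10:].sum() / budget
print(f"top 1% of projects absorb {top_share:.0%} of the budget")
```

Under these made-up numbers, the top few projects absorb nearly the whole budget while the median project gets pocket change, which is exactly the trade-off above; a cap amounts to truncating the largest payouts and redistributing the remainder.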
Let’s assume for now that impact is clustered at the tails.
(I don’t have a strong prior, but this at least doesn’t seem implausible to me.)
Then how would you like to spend the funding? Since there will be a limited amount of money, what is your motivation for giving the low-impact projects anything at all?
Is it to support the people involved so that they keep working, and eventually learn and/or get lucky enough to do something really important?
I’m not sure. The vibe I got from the original post was that it would be good to have small rewards for small-impact projects?
I think the high-impact projects are often very risky, and will most likely have low impact. Perhaps it makes sense to compensate people for taking the hit for society, so that the 1 in 1,000,000 of the people who start such projects can have a high impact?
I’m unsure what size you have in mind when you say small.
I don’t think small monetary rewards (~£10) are very useful for anything (unless lots of people are giving small amounts, or if I do lots of things that add up to something that matters).
I also don’t think small-impact projects should be encouraged. If we respect people’s time and effort, we should encourage them to drop small-impact projects and move on to bigger and better things.
If you think that the projects with the highest expected impact also typically have a low success rate, then a standard impact purchase is probably not a good idea. Under this hypothesis, what you want to do is reward people for expected success rather than actual success.
I talk about success rather than impact because, for most projects, you’ll never know the actual impact. By “success” I mean your best estimate of the project’s impact, from what you can tell after the project is over. (I really meant success, not impact, from the start; I probably should have clarified that somehow?)
I’d say that for most events, success is fairly predictable, and more so with more experience as an organiser. If I keep doing events, the randomness will even out. Would you say that events are low impact? Would you say events are worth funding?
Can you give an example of the type of high impact project you have in mind? How does your statement about risk change if we are talking about success instead?
I think most events will be low impact compared to the highest-impact events. Let’s say you have 100,000 AI safety events. Most of them will be comparatively low impact, but one in particular ends up creating the seed of a key idea in AI safety, and another ends up introducing a key pair of researchers who go on to do great things together.
Now, if I want to pay those two highest-impact events what they are worth relative to all the other events, I have a few options:
1. Pay all of the events based on their expected impact prior to the events, so the money evens out.
2. Pay a very small amount of money to the other events, so I can afford to pay the two events that had many orders of magnitude higher impact.
3. Only buy a small fraction of the impact of the very high impact events, so I have money left over to pay the small events and can reward them all on impact equally.
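To make the trade-off between these options concrete, here is a toy calculation. All the specific figures (the $10,000,000 budget, a 10,000x impact multiplier for the two outlier events, the $50,000 cap) are invented for illustration:

```python
# Toy comparison of the three payout options (all figures are made up).
n_events = 100_000
budget = 10_000_000        # dollars
normal_impact = 1          # impact units of an ordinary event
outlier_impact = 10_000    # two events assumed ~4 orders of magnitude bigger
total_impact = (n_events - 2) * normal_impact + 2 * outlier_impact

# Option 1: pay by expected impact, so everyone gets roughly the same ex ante.
per_event_expected = budget / n_events               # $100 per event

# Option 2: pay in strict proportion to actual impact.
per_unit = budget / total_impact
ordinary_2 = per_unit * normal_impact                # ~$83 per ordinary event
outlier_2 = per_unit * outlier_impact                # ~$833,000 per outlier

# Option 3: cap what you buy of the outliers' impact and spread the rest.
cap = 50_000
outlier_3 = min(outlier_2, cap)
ordinary_3 = (budget - 2 * outlier_3) / (n_events - 2)   # ~$99 per ordinary event

print(f"option 1: ${per_event_expected:,.0f} for every event")
print(f"option 2: ${ordinary_2:,.2f} ordinary / ${outlier_2:,.0f} outlier")
print(f"option 3: ${ordinary_3:,.2f} ordinary / ${outlier_3:,.0f} outlier (capped)")
```

Under these assumptions, options 1 and 3 pay ordinary events almost identically; the real choice is how much of the budget the two outliers are allowed to absorb.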
Wait, what?
100,000 AI safety events?
Like 100,000 individual events?
There is a typo here, right?
Nope, 1/50,000 seems like a realistic ratio of very-high-impact events to normal-impact events.
It can’t take more than ~50 events for every AI safety researcher to get to know each other.
And key ideas are not seeded at a single point in time; they come together from lots of reading and talking.
There is not *the one event* that made the difference while all the others were practically useless. That’s not how research works. Sure, there is randomness, and some meetings are more important than others.
But if it took on average 50,000 events for one such key introduction to happen, then we might as well give up on having events, or find a better way to do them. Otherwise we are just wasting everyone’s time.
But all the other events were impactful, just not compared to those one or two events. The goal of having all the events is to hopefully be the 1 in 50,000 that has a ridiculously outsized impact. It’s high expected value even if, comparatively, all the other events have low impact. And again, that’s comparative: compared to, say, most other events, an event on AI safety is ridiculously high impact.
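As a quick expected-value sanity check (every number here is invented for illustration, not taken from the discussion), whether the rare hits dominate the expected value depends entirely on how outsized a hit is relative to its rarity:

```python
# EV per event = baseline impact + P(hit) * extra impact of a hit.
# All figures are invented for illustration.
baseline = 1.0        # impact units of an ordinary event
p_hit = 1 / 50_000    # the hit ratio suggested above

for hit_multiplier in (10_000, 50_000, 1_000_000):
    ev = baseline + p_hit * hit_multiplier * baseline
    print(f"hit worth {hit_multiplier:,}x: EV = {ev:.1f} ordinary events")
```

On these assumptions, the hits dominate the expected value only when a hit is worth more than ~50,000 ordinary events; below that, the steady incremental value of ordinary events carries most of the EV, which is close to what the two sides here are disagreeing about.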
This is true; much of the networking impact of events is frontloaded.
I happen to think that relative utility is very clustered at the tails, whereas expected value is more spread out. This comes from intuitions from the startup world.
However, it’s important to note that I have also developed a motivation system that allows me not to find this discouraging! Once I started thinking of opportunities for doing good in expected-value terms, and of concrete examples of my contributions in absolute rather than relative terms, neither of these facts was upsetting or discouraging.
Some relevant articles:
https://forum.effectivealtruism.org/posts/2cWEWqkECHnqzsjDH/doing-good-is-as-good-as-it-ever-was
https://www.independent.co.uk/news/business/analysis-and-features/nassim-taleb-the-black-swan-author-in-praise-of-the-risk-takers-8672186.html
https://foreverjobless.com/ev-millionaires-math/
https://www.facebook.com/yudkowsky/posts/10155299391129228
I’m OK with hits-based impact. I just disagree about events.
I think you are correct about this for some work, but not for other work. Things like operations and personal assistance are multipliers, which can consistently increase the productivity of those who are served.
Events that are focused on sharing information and networking fall into this category. People in a small field will get to know each other and each other’s work eventually, but if there are more events it will happen sooner, which I model as an incremental improvement.
But some other events feel much more hits-based, now that I think of it: anything focused on getting people started (e.g. helping them choose the right career), or events focused on ideation.
I notice that I’m less interested in doing those more hits-based types of event. This is interesting. Because these events also differ in other ways, there are alternative explanations, but it seems worth looking into.
Thanks for providing the links; I should read them.
(Of course, everything relating to X-risk is all or nothing in terms of impact, but we can’t measure and reward that until it no longer matters anyway. Therefore, for AI safety I would measure success in terms of research output, which can be shifted incrementally.)
If you’re trying to encourage or motivate people, my very rusty understanding of the psychology literature is that you should give people occasional rewards rather than systematically rewarding them for what you want, because systematic rewards effectively undermine intrinsic motivation: you want people to be focusing on helping people, not on meeting your criteria.
I sort of agree with this, but I want to add some things.
I agree that money is not the best motivator. If I were trying to solve [people are not motivated enough], I would probably suggest some community measure rather than a new funding structure.
Money is for buying people time (i.e. not having to do some day job just to earn a living), or for funding other things they need for whatever awesome project they are doing.
However, money can definitely influence motivation. 80k lists “Pay you feel is unfair” as one of four “major negatives” which “tend to be linked to job dissatisfaction”:
https://80000hours.org/2014/09/update-dont-follow-your-passion/
Yes, that’s actually what I’m talking about.
Imagine I promise to give £10 to every small impact project, £100 to every medium impact, and £1000 to every large impact. You complete a project. It took you 400 hours of work and you’re very proud of it—you think it’s had a very significant impact. I pay you £10.
How do you think it would feel? How would it affect your future motivation? Are you sure it’s not better to a) get nothing and not have the system of judgement at all, or b) get a surprise thank you note from me with £10 inside that you weren’t expecting?
I think a lot of people spend a lot of time and effort on things that aren’t immediately useful, but I want them to keep their motivation because I believe that one day they may have a hit! If they keep getting £10 cheques for every 10-week cycle of work, I’m afraid they’re going to be demotivated.
In this situation I would think you had evaluated my project as “small impact”, which is possibly useful information, depending on how reliable I think your evaluation is. If I trust your judgement, this would obviously be discouraging, since I thought it was much more impressive. But in the end I’d rather be right than proud, so that I can make sure to do better things in the future.
How I react would also depend on whether your £10 is all I get, or whether I get £10 each from lots of people, because that could potentially add up, maybe?
What it mainly comes down to in the end is: do I get paid enough to sustainably afford to do this work, or do I need to focus my effort on getting a paid job instead?
If you are a funder, and you think what I’m doing is good but not good enough to pay me a liveable wage, then I’d much prefer that you don’t try to encourage me, but instead are just upfront about this. Encouraging people to keep up an unsustainable work situation is exploitative and will backfire in the long run.
I definitely agree with that. But on the other hand, refusing to pay someone whose good idea didn’t work out and ‘have impact’ through no fault of their own also seems exploitative!
I think people who rely on this type of work for a living should get paid a salary with benefits and severance. A project-to-project lifestyle doesn’t seem conducive to focusing on impact.
Letting the person running the project take all the risk might not be optimal, but I would also say it is not exploitative, as long as they know this from the start.
I’m not yet sure whether I think the amount of money should be 100% based on actual impact, or whether we also want to reward people for projects that had high expected impact but low actual impact. The main argument for focusing on actual impact is that it is less subjective.
Um, I was going to argue with this. But actually I think you are right.
Something like: “We like what you have done so far, so we will hire you to keep doing good things based on your own best judgment.”
Agreed. In my brief experience with academic consulting, one thing I’ve realised is that it is quite reasonable for contracted consultants to charge a 50-100% premium (on top of their utilisation ratio, usually 50%, so another 2x markup) to account for their lack of benefits.
So if somebody is expecting to earn a ‘fair’ salary from impact purchases compared to employment (or from any other type of short-term contract work, really), they should expect a funder to pay a premium for this compared to employing them (or funding another organisation to do so). This doesn’t seem like a good use of funds in the long term if it is possible to employ that person.
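As a sanity check on that markup arithmetic, here is a sketch. The 50% utilisation ratio and the 50-100% premium are the figures from the comment above; the £40,000 target salary is a made-up placeholder:

```python
# Rough contractor-vs-employee cost comparison (illustrative figures only).
target_salary = 40_000    # hypothetical 'fair' annual salary

utilisation = 0.5         # half a contractor's time is billable -> 2x markup
for benefits_premium in (0.5, 1.0):   # 50% and 100% premium for lack of benefits
    funder_cost = target_salary / utilisation * (1 + benefits_premium)
    print(f"{benefits_premium:.0%} premium: funder pays ~£{funder_cost:,.0f} "
          f"({funder_cost / target_salary:.1f}x the equivalent salary)")
```

So, under these assumptions, funding the same person through project-by-project purchases costs roughly 3-4x what employing them would, which is the point being made about long-term use of funds.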