I’ve seen similar things a few times before and am pretty tired of it at this point.
I think I’d encountered the issue theoretically before, and maybe in some ambiguous cases, but researching this one in some depth made it more shocking.
Fair point on 2. (prediction markets being too restrictive) and 3. ()
4., I think, is a feature of the report being aimed at a particular company, so considerations around, e.g., office politics making prediction markets fail are still important. As you kind of point out, overall this isn’t really the report I would have written for EA, and I’m glad I got bought out of that.
5. I don’t think this is what we meant, e.g., see:
Like Eli below, I am also in favour of starting with small interventions and titrating one’s way towards more significant ones.
For internal predictions, start with interventions that take the least amount of employee time
I.e., we agree that small experiments (e.g., “Delphi-like automatic prediction markets built on top of dead-simple polls”) are great. This could maybe have been expressed more clearly.
On the other hand, I didn’t really have the impression that there was someone inside Upstart willing to put in the time to do the experiments if we didn’t.
6. Sure. One thing we were afraid of was cultures having an incentive to pretend they are more candid than they really are. Social desirability bias feels strong.
7. (experimentation having positive externalities.) Yep!