Unless I’m missing something, thin markets and the difficulties of measuring value in sufficiently precise ways should be enough to mostly doom futarchy attempts in EA organizations.
Advisory markets or just frequent betting seems more plausible but still hard.
For example, if we tried to run a market on whether or not Lizka should write this post, I just don’t (currently) see a way for us to have a sufficiently broad and sufficiently precise definition of welfare to make welfare-conditional predictions on that decision.
However, I can imagine some betting or a very lightweight prediction market to resolve disagreements on specific interesting proxies (e.g. “will this post have >50 karma”, “will any work be built on top of this in <3 years”, “will Lizka think this post is a good use of her time 2 months after publication”, “will this be complete by Y date”), in addition to the project forecasts we currently have.
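As a minimal sketch of what that lightweight betting could look like (all forecasters, questions, and numbers below are hypothetical), one could simply record probability forecasts on the binary proxies and score them with Brier scores once the proxies resolve:

```python
# Minimal sketch of lightweight internal betting on binary proxies.
# Forecasters, questions, and probabilities here are all hypothetical.

def brier_score(forecast, outcome):
    """Squared error of a probability forecast against a 0/1 outcome (lower is better)."""
    return (forecast - (1.0 if outcome else 0.0)) ** 2

# Each record: (forecaster, proxy question, probability given, resolved outcome).
bets = [
    ("alice", "post gets >50 karma", 0.7, True),
    ("bob",   "post gets >50 karma", 0.4, True),
    ("alice", "built on within 3 years", 0.2, False),
    ("bob",   "built on within 3 years", 0.5, False),
]

# Average Brier score per forecaster across resolved questions.
scores = {}
for name, _question, p, outcome in bets:
    scores.setdefault(name, []).append(brier_score(p, outcome))

for name, ss in sorted(scores.items()):
    print(name, round(sum(ss) / len(ss), 3))
```

This avoids the thin-market problem entirely (no prices, just track records), at the cost of only rewarding accuracy after the fact rather than aggregating information up front.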
More generally, I’m skeptical that markets are an unusually efficient way to convert information into prices in smaller ecosystems. It’s very rare that things like internal hiring and conference-room bookings within a company are allocated through prices, and the few exceptions I’m aware of did not, to the best of my knowledge, become unusually successful companies.
I basically agree with Linch’s answer, and just want to add that a futarchy-like system (or even, likely, coherent use of prediction markets) would require a lot of management/organizational support (in addition to subsidization, probably, to push back against thin markets), and management/operations already seems like a current bottleneck in EA.
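On the subsidization point: the standard tool here is Hanson’s logarithmic market scoring rule (LMSR), an automated market maker whose worst-case loss — the subsidy the organization commits — is capped by a liquidity parameter. A sketch of the mechanism, with hypothetical numbers:

```python
import math

# Sketch of Hanson's logarithmic market scoring rule (LMSR), the standard
# automated market maker for subsidizing thin prediction markets.
# The liquidity parameter b caps the subsidizer's worst-case loss at b * ln(n_outcomes).

def lmsr_cost(quantities, b):
    """Cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b):
    """Instantaneous price (implied probability) of outcome i."""
    exps = [math.exp(q / b) for q in quantities]
    return exps[i] / sum(exps)

b = 100.0       # liquidity / subsidy parameter (hypothetical choice)
q = [0.0, 0.0]  # outstanding YES and NO shares

# With no trades, a binary market prices both outcomes at 0.5.
assert abs(lmsr_price(q, 0, b) - 0.5) < 1e-12

# A trader buys 50 YES shares, paying the cost difference;
# the implied probability of YES rises above 0.5.
cost_paid = lmsr_cost([50.0, 0.0], b) - lmsr_cost(q, b)
print(round(lmsr_price([50.0, 0.0], 0, b), 3))

# Worst-case loss for the subsidizer: b * ln(2) for a binary market.
print(round(b * math.log(2), 1))
```

The point being that the subsidy is a real, bounded operating cost on top of the management overhead, which is part of why this seems expensive for a small ecosystem.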
(I’m also unconvinced that EA is the best place to kickstart something like using prediction markets, since people in EA are presumably already incentivized to make decisions that are likely to produce good outcomes and to share information they feel is relevant to those decisions. The strength of futarchy is (in theory) channeling private monetary/profit incentives towards common values or a kind of communal good, so it makes more sense outside of communities that are inherently allied under a common project. I might be quite wrong, though, and would be interested in possible counter-arguments.
On a similar note, my understanding is that Hanson considers medium-to-large private companies the ideal place to kickstart the use of prediction markets, with the idea that the techniques developed and refined there can eventually be applied for direct public benefit as well.)