I’m a developer on the EA Forum (the website you are currently on). You can contact me about forum stuff at will.howard@centreforeffectivealtruism.org or about anything else at w.howard256@gmail.com
Will Howard
Donation Election rewards
I think this is valuable research, and a great write up, so I’m curating it.
I think this post is so valuable because having accurate models of what the public currently believes seems very important for AI comms and policy work. For instance, I personally found it surprising how few people disbelieve that AI is a major risk (only 23% disbelieve it being an extinction-level risk), and how few people dismiss it for “sci-fi” reasons. I have seen fear of “being seen as sci-fi” treated as a major consideration around AI communications within EA, so if the public are not (or are no longer) put off by this then that would be an important update for people working in AI comms to make.
I also like how clearly the results are presented, with a lot of the key info contained in the first graph.
I see it as “incentives that nudge us to do bad things”, plus this incentive structure being something that naturally emerges or is hard to avoid (“the dictatorless dictatorship”).
I think “Moloch” gets this across a bit better than just “incentives” which could include things like bonuses which are deliberately set up by other people to encourage certain behaviour.
Tyler Cowen has this criticism of prediction markets which is like (paraphrased, plus slightly made up and mixed with my own opinions): “The whole concept is based on people individually trying to maximise their wealth, and this resulting in wealth accruing to the better predictors over time. But then in real life people just bet these token amounts that add up to way less than the money they get from their salary or normal investments. This completely defeats the point! You may as well just take the average probability at that point rather than introducing this overcomplicated mechanism”.
Play money can fix this specific problem, because you can make it so everyone starts with the same amount, whereas real money is constantly streaming in and out for reasons other than your ability to predict esoteric world events. I think this is an underrated property of play money markets, as opposed to the usual arguments about risk aversion. (Of course if you can buy play money with real money this muddies the waters quite a bit)
I would really like to add some kind of limit order mode. I also often set up a limit order to sell out of my position once I’ve reached a certain profit, which I’d like to be able to do via the calculator.
The main reason I haven’t done this (or the discount rate suggested by @Matthew_Barnett below) is that I wanted to keep the calculator very simple, so that people aren’t overwhelmed by settings. I think the cost of each additional setting is quite high because:
A lot of people will be put off and literally just click away if there are too many settings, and then go back to making worse bets than if they had only been shown a subset of those settings
People (me) will waste time fiddling with settings that aren’t that important, and either end up making worse bets or just not benefit much from the extra effort (or think “ugh, I have to estimate the expected resolution time in both the YES and NO case” when they see a favourable market, and just not bet on it instead). The discount/expected growth rate is very susceptible to this I think, because it’s easy to be overconfident and avoid ok bets due to the perceived opportunity cost (especially as your growth rate will go down as your balance goes up and it becomes harder to find markets that can absorb all your mana, so people are likely to overestimate their long-term growth rate)
On the practical side, every extra setting increases the chance of bugs, and being confident that the answer is correct matters a lot for a calculator that makes important decisions for you
My current plan is to leave this calculator basically as is, and build another more fully featured one for advanced users, which will hopefully include these things:
Accounting for several estimates at the same time, and remembering previous bets
Time discounting (which overlaps with the one above)
Limit orders, or some other way of automatically buying in/out of a position over time
Estimating the resolution time in each outcome (this is important if you have a market like “Will Donald Trump tweet before the end of 2023”, where it can resolve YES early but can’t resolve NO early. It changes the ROI quite a bit)
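To illustrate that last point with some toy numbers (my own, not from the calculator): a crude way to see the effect is to compare expected profit per month of capital locked up, since an early YES resolution frees your mana sooner.

```python
def monthly_roi(price: float, p: float, months_yes: float, months_no: float) -> float:
    """Expected profit per mana bet on YES, divided by the expected
    number of months the capital is tied up. A rough measure, just to
    show the effect of asymmetric resolution times."""
    expected_profit = p * (1 / price - 1) - (1 - p)
    expected_months = p * months_yes + (1 - p) * months_no
    return expected_profit / expected_months

# A market priced at 50% that you believe is 80%:
# YES can resolve after 1 month, but NO only at the 12 month mark
print(monthly_roi(0.5, 0.8, 1, 12))   # ~0.1875, i.e. ~19% per month
# Same edge, but both outcomes resolve at the 12 month mark
print(monthly_roi(0.5, 0.8, 12, 12))  # ~0.05, i.e. 5% per month
```

The edge is identical in both cases; only the asymmetric resolution time changes the effective return, by almost 4x here.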
I’m not 100% sure this is the right approach though, because I could throw some of these things in “Advanced settings” pretty easily (within a week or two), whereas building the better thing would take at least a couple of months. I’d be interested in your thoughts on this seeing as you’re an actual real user!
Good point, this is worth considering :)
I don’t think there is any technical reason why the communication with the manifold APIs couldn’t just happen on the frontend, so it might be worth looking into?
I tried to do this initially but it was blocked by Manifold’s CORS policy. I was trying to keep everything in the frontend but this and the call to fetch the authenticated user both require going via a server unfortunately.
Also something else to note in terms of privacy: I log the username and the amount when someone places a bet.
It doesn’t need the API key at all to calculate the recommended amount, so for people concerned about this you can just paste the amount into Manifold
It assumes the market probability is correct for all your other bets, which is an important caveat. This will make it more risk averse than it should be (you can afford to risk more if you expect your net worth to be higher in the future).
It also assumes all the probabilities are uncorrelated, which is another important caveat. This one will make it less risk averse than it should be.
I’m planning on making a version that does take all your estimates into account and rebalances your whole portfolio based on all your probabilities at once (hence mani-folio). This is a lot more complicated though, so I decided not to run before I could walk. Also I think the simplicity of the current version is a big benefit: if you are betting over a fairly short time horizon and you don’t have any big correlated positions, then the above two things will just be small corrections.
It doesn’t account for that unfortunately; one of the simplifying assumptions it makes is that you will wait for all your positions to resolve rather than selling them.
It directly calculates the amount that will maximise expected log wealth, rather than using a fixed fraction. Basically it simulates the possible outcomes of all the other bets you have open. Then it adds in the new bet you are making and adjusts the size to maximise expected log wealth once all the bets have resolved.
If you have a very diversified portfolio of other bets this will be almost the same as betting the Kelly fraction (the f = p − q/b version) of your net asset value. If you have a riskier portfolio, such as one massive bet, then it will be closer to that fraction of your balance. It should always be between these two numbers.
(Manifold also has loans which complicates things, the lower bound is actually on the Kelly fraction of (balance minus loans))
Sorry if it’s confusing that in the post I’m using “the Kelly criterion” to mean maximising expected log wealth, whereas some other places use it to mean literally betting according to the formula f = p − q/b. I prefer the broader definition because “the Kelly criterion” has a certain ring to it 😌, and this is also the definition people on LessWrong tend to use.
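To show the two definitions agree in the simple case: with no other open positions, numerically maximising expected log wealth recovers the closed-form Kelly fraction. This is a sketch of the idea (my own toy code, not Manifolio’s actual implementation):

```python
import math

def kelly_closed_form(p: float, b: float) -> float:
    """The classic formula f = p - q/b: the fraction of your bankroll
    to bet on an event with win probability p at net odds b."""
    return p - (1 - p) / b

def maximise_log_wealth(p: float, b: float, n_grid: int = 100_000) -> float:
    """Grid search for the bet fraction f that maximises
    p*log(1 + b*f) + (1-p)*log(1 - f). With no other open bets this
    should agree with the closed-form Kelly fraction."""
    best_f, best_ev = 0.0, -math.inf
    for i in range(n_grid):
        f = 0.999 * i / n_grid  # never bet the full bankroll
        ev = p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)
        if ev > best_ev:
            best_f, best_ev = f, ev
    return best_f

# 60% chance at even odds: Kelly says bet 20% of your bankroll
print(kelly_closed_form(0.6, 1.0))    # ~0.2
print(maximise_log_wealth(0.6, 1.0))  # ~0.2
```

What the calculator does on top of this, as described above, is replace the clean two-outcome picture with simulated outcomes of all your other open bets before maximising.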
Manifolio: The tool for making Kelly optimal bets on Manifold Markets
The company that builds the UK’s nuclear weapons is hiring for roles related to wargaming
Hi Joseph, we removed that section because we could see that it wasn’t getting many clicks, and we didn’t think the algorithm for selecting the posts was that good[1]. You can still find a page with the remnants of it here though if you want. The list at the bottom there uses the same algorithm as the old Frontpage section, including filtering out things you have already read.
We have started experimenting with better algorithmic recommendations recently, currently these are just at the bottom of posts but we’re thinking of adding them back into the Frontpage. We’re also thinking about adding a “best of” page (possibly algorithmic, possibly not) à la slatestarcodex, so hopefully your wishes will be answered soon. The filter on the All posts page is a good idea too, we’ll consider adding that, it might be tricky to do though for annoying technical reasons.
- ^ Because it would always show the same few high karma posts until the user clicked on them, so if there were none that you wanted to click on then the list would become useless
I have a basic question about the opportunity cost part: do you track whether charities keep their reserves in investments vs cash? Are charities allowed (legally) to invest all their reserves in the stock market?
It seems like this would make a big difference to the opportunity cost if you are comparing keeping money in OpenPhil (which itself has >4 years of reserves and presumably keeps this in appreciating assets) vs giving it to NTI
Thanks for sharing this, I thought it was great! I’ve curated it, I’ll try to post a longer comment tomorrow elaborating on why.
A complaint about using average Brier scores
Comparing average Brier scores between people only makes sense if they have made predictions on exactly the same questions, because making predictions on more certain questions (such as “will there be a 9.0 earthquake in the next year?”) will tend to give you a much better Brier score than making predictions on more uncertain questions (such as “will this coin come up heads or tails?”). This is one of those things that lots of people know, but then everyone (including me) keeps using them anyway because it’s a nice simple number to look at.
To explain:
The Brier score for a binary prediction is the squared difference between the predicted probability and the actual outcome: (p − o)², where o is 1 if the event happened and 0 if it didn’t. For a given forecast, predicting the correct probability will give you the minimum possible expected Brier score (which is what you want). But this minimum possible score varies depending on the true probability of the event happening.
For the coin flip the true probability is 0.5, so if you make a perfect prediction you will get an expected Brier score of 0.25 (0.5 × (0.5 − 1)² + 0.5 × (0.5 − 0)² = 0.25). For the earthquake question maybe the correct probability is 0.1, so the best expected Brier score you can get is 0.09 (0.1 × (0.1 − 1)² + 0.9 × (0.1 − 0)² = 0.09), and it’s only if you are really badly wrong (you think p > 0.5) that you can get a score higher than the best score you can get for the coin flip.
So if forecasters have a choice of questions to make predictions on, someone who mainly goes for things that are pretty certain will end up with a (much!) better average Brier score than someone who predicts things that are genuinely more 50/50. This also acts as a disincentive for predicting more uncertain things, which seems bad.
We’ve just added Fatebook (which is great!) to our Slack and I’ve noticed this putting me off making forecasts for things that are highly uncertain. I’m interested in whether there is some lore around dealing with this among people who use Metaculus or other platforms where Brier scores are an important metric. I only really use prediction markets, which don’t suffer from this problem.
Note: this also applies to log scores etc
Yes, this applies to all requests including /graphql. If the user agent of the request matches a known bot we will return a redirect to the forum-bots site. Some libraries (such as Python requests and fetch in JavaScript) automatically follow redirects, so hopefully some things will magically keep working, but this is not guaranteed.
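A minimal stdlib-only sketch of that behaviour (hypothetical paths, not the forum’s real routing): the server answers a “bot” request with a 302, and a client that follows redirects lands on the bots page without any code changes.

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/graphql":
            # Pretend the user agent matched a known bot:
            # redirect instead of serving the API response
            self.send_response(302)
            self.send_header("Location", "/forum-bots")
            self.end_headers()
        else:
            payload = b"bots go here"
            self.send_response(200)
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request logging

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# urllib follows the 302 automatically, as python requests does by default
with urllib.request.urlopen(f"http://127.0.0.1:{port}/graphql") as resp:
    page = resp.read()
server.shutdown()
print(page)  # b'bots go here'
```

Clients that don’t follow redirects will instead see the 3xx response itself, which is the “not guaranteed” case mentioned above.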
I appreciate that this is annoying, and we didn’t really want to do it. But the site was being taken down by bots (for a few minutes) almost every day a couple of weeks ago so we finally felt this was necessary.
Thanks for the suggestion! We’ll add a user setting for this 👍
I’ll be going to this. I just listened to your podcast with Daniel Filan and I thought your point about protests being underrated was a good one