Delaney’s hitting 2% on PredictIt for the first time, AFAIK. Did your quasi-endorsement move the markets?
This is awesome and I’ve been wanting something like it but am too lazy to create it myself. So I’m really glad kbog did.
I vote for continuing to include weightings for e.g. candidate health. The interesting question is who is actually likely to do the most good, not who believes the best things. So to model that well you need to capture any personal factors that significantly affect their probability of carrying out their agenda.
I think AI safety and biorisk deserve some weighting here even if candidates aren’t addressing them directly. You could use proxy issues that the candidates are more likely to have records on and that relevant experts broadly agree are helpful or unhelpful (e.g. actions likely to lead to an arms race with China), then adjust for uncertainty by giving them a somewhat lower weight than you would give a direct vote on something like creating an unfriendly AI.
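To make the discounting idea concrete, here’s a minimal sketch of what it could look like. Everything in it is hypothetical: the issue names, the weights, the 0.5 confidence factor, and the scores are placeholders, not a proposed methodology.

```python
# Sketch: a direct record on an issue gets full weight; a proxy record
# (e.g. arms-race-prone behavior standing in for AI safety) gets the
# same weight scaled down by a confidence factor. All numbers are
# illustrative placeholders.

ISSUE_WEIGHTS = {"ai_safety": 0.4, "biorisk": 0.3, "climate": 0.3}
PROXY_CONFIDENCE = 0.5  # how well the proxy tracks the real issue

def score_candidate(direct: dict, proxy: dict) -> float:
    """Each record maps issue -> score in [-1, 1]; proxies are discounted."""
    total = 0.0
    for issue, weight in ISSUE_WEIGHTS.items():
        if issue in direct:
            total += weight * direct[issue]
        elif issue in proxy:
            total += weight * PROXY_CONFIDENCE * proxy[issue]
    return total

# Example: no direct AI-safety record, but a (negative) proxy record on
# arms-race behavior; direct records on biorisk and climate.
print(score_candidate(
    direct={"biorisk": 0.6, "climate": 0.8},
    proxy={"ai_safety": -0.4},
))  # 0.3*0.6 + 0.3*0.8 + 0.4*0.5*(-0.4) = 0.34
```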
I’d be interested in seeing why they rate malaria so much lower, at least in relative terms, than most of the EA community does. That’s probably a good clue to the differences in methodology, and a shortcut to figuring out whose methods yield more accurate priorities.
P.S. I’m not surprised that measuring the UN development goals is unproductive; a lot of them are obviously distractions. Priorities research only adds value when it’s non-obvious whether something should be a priority. Once it’s clear that the goal is garbage, move on.
Even if your policy views are correct, having friends on the other side of the aisle will do wonders for your predictive abilities, which should influence how you vote in party primaries where electability is at issue. I’m a staunch Democrat currently living in a similarly liberal area but born and raised in a much more conservative area. I was always more bullish on Trump’s odds than my friends here, and every time I hear them say they can’t understand how he got elected or how he’s still popular with the base, I wonder what other easily avoidable mistakes they’re currently making. And there’s no special magic in my improved predictive ability; I just talk to my grandma regularly.
Or let them pay for their own college. The rates they’re charged will take your income into account, so they’ll have to take out a lot of loans. But the loans are pretty manageable and essentially risk-free as long as income-based repayment remains an option. Now, if they are committed EAs who will be earning to give, this accomplishes less, since the cost comes out of their future donations. But even a 95% probability that the cost falls on future giving beats the ~100% probability if you pay it yourself. And since the interest rates on those loans are lower than average stock market returns, the smart financial move is to take the loans and pay them off as slowly as possible rather than pay the higher up-front costs (assuming they’re reasonably risk tolerant, as young people should be, and especially when the goal is to maximize good done rather than their own comfort). This is a little more complicated to see when we’re talking about earning to give, but by giving now you must be assuming a discount rate on donations higher than what you could earn by investing and giving later; otherwise you’d be investing instead. So if giving the money away now is worth sacrificing the ~10% annual gains you could make on it, it’s even more worth sacrificing ~7%/year in interest payments.
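For intuition, here’s the borrow-and-invest arithmetic as a rough sketch. The ~7% and ~10% rates are the figures mentioned above; the $50k cost, ten-year horizon, and single balloon repayment are simplifying assumptions (real loans amortize, and market returns aren’t guaranteed).

```python
# Rough sketch of the loans-vs-cash comparison under the assumptions above.

def compound(principal: float, rate: float, years: int) -> float:
    """Lump sum compounded annually."""
    return principal * (1 + rate) ** years

COST = 50_000       # hypothetical tuition bill
LOAN_RATE = 0.07    # ~7%/year student loan interest (figure from above)
MARKET_RATE = 0.10  # ~10%/year expected market return (figure from above)
YEARS = 10

# Option A: pay cash now -> extra wealth at year 10 is $0.
# Option B: borrow, invest the cash, repay the whole balance at year 10.
invested = compound(COST, MARKET_RATE, YEARS)  # ~$129,687
owed = compound(COST, LOAN_RATE, YEARS)        # ~$98,358
print(f"Wealth advantage of borrowing: ${invested - owed:,.0f}")  # ~$31,330
```

The ~3-point rate gap compounds into a five-figure difference over a decade, which is the whole argument for paying the loans off slowly.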
I don’t think life evaluation is the right measure of happiness for this, though. I’m pretty egotistical, and more income would definitely make me happier well beyond the threshold here. I make about $100k/year and would definitely be more satisfied with my life if I made $200k, but purely from a “feeling like a success”/social status standpoint. I have no realistic lifestyle use for $200k; I can’t even figure out how to spend $100k on lifestyle, because the things people typically buy are really boring. So even though making more money would make me happier, spending more of it on myself wouldn’t. Positive affect is a better measure, but it may still be contaminated by the same status effects that drive life evaluation. Negative affect is probably the best measure, since it’s the one most related to the possibility of suffering from insufficient money.
Maybe I should’ve asked you the question I just asked on another post instead: as someone interested in minimizing x-risk, who should I support for President? Or better yet, who has a good compilation of candidates’ records on x-risk-related issues, so I can make my own decision?
As someone with an interest in government and relatively new to the concept of x-risk, I have a semi-urgent question: who should I support for President? I will probably have to get involved with a campaign in some way in the next few months to maximize my odds of getting a decent appointment after the election. There are plenty of interest-group ratings, position statements, etc. on environmental issues, but I can’t find much of practical use on the other types of risk, which seem to be more serious at least in aggregate and perhaps individually too. I could try compiling my own ratings, but I know far less than a lot of the people in this community, so if someone has already figured out, or is in the process of figuring out, where the candidates stand on the risks they have expertise in, I would greatly appreciate it. It doesn’t have to look like standard interest-group ratings, and maybe it shouldn’t: the fact that someone has a hawkish temperament toward China, which would make them more prone to starting an arms race, is probably more important to AI safety than the specifics of any technology-related votes they’ve taken.