I’m currently working as a Research Scholar at the Future of Humanity Institute. I’ve previously co-created the application Guesstimate. Opinions are typically my own.
I’m not very familiar with investment options in the UK, but there are of course many in the US. I believe that being a US citizen helps a fair bit with some of these options.
My impression is that getting full citizenship in both the US and the UK is generally extremely difficult, so I imagine that ever changing your mind after renouncing would be quite a challenge.
One really nice benefit of having both citizenships is the flexibility it gives you. If either country suddenly becomes much more preferable for some reason or another (imagine some tail risk, like a political disaster of some sort), you have the option of easily moving to the other.
You also need to account for how the US might treat you if you do renounce citizenship. My impression is that it can be quite unfavorable to those who do this (particularly if it thinks it’s for tax reasons): coming after these people’s assets, making it difficult for them to return to the US for any reason, and other things.
I would be very hesitant to renounce either citizenship until you’ve done a fair amount of research on the downsides.
https://foreignpolicy.com/2012/05/17/could-eduardo-saverin-be-barred-from-the-u-s-for-life/
I’ve been thinking about this topic recently. One question that comes to mind: How much of Good Judgement do you think is explained by g/IQ? My quick guess is that they are heavily correlated.
My impression is that people with “good judgement” match closely with the people that hedge funds really want to hire as analysts, or who make strong executives or product managers.
(1) The line between preferences and information seems thin to me. When groups are divided about abortion, for example, which cluster would that fall into?
It feels fairly clear to me that the media facilitates political differences, as I’m not sure how else these could be relayed to the extent they are (direct friends/family is another option, but wouldn’t explain quick and correlated changes in political parties).
(2) The specific issue of prolonged involvement doesn’t seem hard to believe. People spend lots of time on YouTube. I’ve definitely gotten lots of recommendations to the same clusters of videos. There are only so many clusters out there.

All that said, my story above is fairly different from Stuart’s. I think his is more of “these algorithms are a fundamentally new force with novel mechanisms of preference change”. My claim is that media sources naturally change the preferences of individuals, so of course, if algorithms have control in directing people to media sources, they will be influential in preference modification. Here, “preference modification” basically means, “I didn’t use to be an intense anarcho-capitalist, but then I watched a bunch of the videos, and now I tie in strongly to the movement.”
However, the issue of “how much do news organizations actively optimize preference modification for the purposes of increasing engagement, either intentionally or unintentionally?” is more vague.
There’s a lot of anecdotal evidence that news organizations essentially change users’ preferences. The fundamental story is quite similar. It’s not clear how intentional this is, but there seem to be many cases of people becoming extremized after watching/reading the news (now that I think about it, this seems like a major factor in most of these situations).
I vaguely recall Matt Taibbi complaining about this in the book Hate Inc. Here are a few related links:
https://nymag.com/intelligencer/2019/04/i-gathered-stories-of-people-transformed-by-fox-news.html
https://www.salon.com/2018/11/23/can-we-save-loved-ones-from-fox-news-i-dont-know-if-its-too-late-or-not/
If it turns out that news channels change preferences, it seems like a small leap to suggest that recommender algorithms that get people onto news programs lead to changing their preferences. Of course, one should have evidence about the magnitude and so on.
I’ve done a bit of thinking on this topic, main post here:
https://www.lesswrong.com/posts/vCQpJLNFpDdHyikFy/are-the-social-sciences-challenging-because-of-fundamental
I’m most excited about fundamental research in the behavioral sciences, just ideally done much better. I think the work of people like Joseph Henrich, David Graeber, and Robin Hanson has been useful and revealing. It seems to me that right now our general state of understanding is quite poor, so what I imagine as minor improvements in particular areas feel less impactful than just better overall understanding.
This looks really useful; many thanks for the writeup. I’d note that I’ve been using Vanguard for regular investments and found the website annoying and the customer support quite bad; there would be long periods where they wouldn’t offer any support because things were “too crowded”. I think most people underestimate the value of customer support, in part because it is most valuable in tail-end situations.
Some quick questions:
- Are there any simple ways of making investments in these accounts that offer 2x leverage or more? Are there things here that you’d recommend?
- Do you have an intuition around when one should set up a Donor-Advised Fund? If there are no minimums, should you set one up once you hit, say, $5K in donations that won’t be spent in a given tax year?
- How easy is it for others to invest in one’s Donor-Advised Fund? Like, would it be really easy to set up your own version of EA Funds?
I think the phrases “Research Institute”, and in particular “...Existential Risk Institute”, are a best practice and should be used much more frequently.
Centre for Effective Altruism → Effective Altruism Research Institute (EARI)
Open Philanthropy → Funding Effective Research Institute (FERI)
GiveWell → Short-termist Effective Funding Research Institute (SEFRI)
80,000 Hours → Careers that are Effective Research Institute (CERI)
Charity Entrepreneurship → Charity Entrepreneurship Research Institute (CERI 2)
Rethink Priorities → General Effective Research Institute (GERI)
Center for Human-Compatible Artificial Intelligence → Berkeley University AI Research Institute (BUARI)
CSER → Cambridge Existential Risk Institute (CERI 3)
LessWrong → Blogging for Existential Risk Institute (BERI 2)
Alignment Forum → Blogging for AI Risk Institute (BARI)
SSC → Scott Alexander’s Research Institute (SARI)
Maybe, Probabilistically Good?
I think this is a good point. That said, I imagine it’s quite hard to really tell.
Empirical data could be really useful here. We could run online experiments in simple cases, or maybe even have some university chapters try out different names and see if we can infer any substantial differences.
This is really neat. I think in a better world, analysis like this would be done by Goodreads and updated on a regular basis. Hopefully the new API changes won’t make this sort of work more difficult in the future.
I’d also note that the larger goal is to scale in non-human ways. If we have a bunch of examples, we could:
1) Open this up to a prediction-market-style setup, with a mix of volunteers and possibly inexpensive hires.
2) As we get samples, some people could use data analysis to make simple algorithms that estimate the value of many more documents (see the sketch after the link below).
3) We could later use ML and similar techniques to scale this further.
So even if each item were rather time-costly right now, this might be an important step for later. If we couldn’t do this even with a lot of work, that would be a significant blocker.
https://www.lesswrong.com/posts/kMmNdHpQPcnJgnAQF/prediction-augmented-evaluation-systems
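To make step 2 a bit more concrete, here’s a minimal sketch (in Python, with entirely made-up documents and ratings; a real version would need much more data and better features): fit a simple regression on a small human-rated sample, then use it to cheaply estimate the value of unrated documents.

```python
# A toy version of step 2: learn from a few human-rated documents,
# then estimate the value of many unrated ones.
# All documents and scores below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

rated_docs = [
    "A careful cost-effectiveness analysis of a global health charity.",
    "A short opinion piece with no sources.",
    "An in-depth literature review on forecasting accuracy.",
]
human_scores = [8.5, 2.0, 9.0]  # hypothetical evaluator ratings

unrated_docs = [
    "A detailed technical report with extensive references.",
    "A two-sentence hot take.",
]

# Featurize the documents and fit a simple regression on the rated sample.
vectorizer = TfidfVectorizer()
model = Ridge().fit(vectorizer.fit_transform(rated_docs), human_scores)

# Cheap value estimates for documents no human has evaluated yet.
for doc, score in zip(unrated_docs, model.predict(vectorizer.transform(unrated_docs))):
    print(f"{score:.1f}  {doc}")
```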
From where I’m coming from, having seen bits of many sides of this issue, I think the average quality of donors matters more than their number.
Traits of mediocre donors (including “good” donors with few resources):
- They don’t hunt for great opportunities
- They produce high amounts of noise/randomness in their results
- They are strongly overconfident in some weird ways
- They have poor resolution, meaning they can’t choose targets much better than light common-sense wisdom
- They are difficult, time-consuming, and opaque to work with
- They are not very easy to understand or predict
If one particular person not liking you for an arbitrary reason (uncorrelated overconfidence) stops you from getting funding, that would be the sign of a mediocre donor.
If we had a bunch of these donors, the chances would go up for some nonprofits. Different donors could be overconfident in different ways, leading to more groups falling above or below different bars. Some bad nonprofits would be happy, because the noise would increase their chances of getting funding. But I think this would be a pretty mediocre world overall.
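As a rough illustration of the noise point, here’s a minimal Monte Carlo sketch with entirely made-up numbers: a nonprofit whose true quality is below the funding bar gets funded by someone surprisingly often once there are many independently noisy donors.

```python
# Toy simulation: independently noisy donors evaluating a below-the-bar
# nonprofit. More donors -> higher chance that at least one funds it.
# All parameters are hypothetical.
import random

TRUE_VALUE = 4.0   # the nonprofit's true quality
FUNDING_BAR = 5.0  # donors fund anything they estimate to be above this
NOISE = 1.5        # stdev of each donor's independent evaluation error
TRIALS = 10_000

def chance_funded(num_donors: int) -> float:
    funded = 0
    for _ in range(TRIALS):
        estimates = (random.gauss(TRUE_VALUE, NOISE) for _ in range(num_donors))
        if any(e > FUNDING_BAR for e in estimates):
            funded += 1
    return funded / TRIALS

for n in (1, 3, 10):
    print(f"{n} donors: {chance_funded(n):.0%} chance of getting funded")
```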
Of course, one could argue that a given donor base isn’t that good, so more competition is likely to result in better donors. I think competition can be quite healthy and result in improvements in quality. So more organizations can be good, but for different reasons, and only insofar as they result in better quality.
Similar to Jonas, I’d like to see more great donors join the fray, both by joining the existing organizations and helping them, and by making some new large funds.
On the first part:
The main problem I’m worried about is not that the terminology is different (most of these questions use fairly basic terminology so far), but rather that there is no order to all the questions. This means that readers have very little clue what kinds of things are forecasted.

Wikidata does a good job of having a semantic structure where, if you want any type of fact, you know where to look. Compare this page on Barack Obama to a long list of facts, some about Obama, some about Obama and one or two other people, all somewhat randomly written and ordered. See the semantic web or discussion of web ontologies for more on this subject.
I expect that questions will eventually follow a much more semantic structure, and correspondingly, there will be far more questions at some point in the future.
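As a rough illustration of what I mean by a semantic structure (the schema and field names here are entirely hypothetical, loosely modeled on Wikidata’s entity/property pattern):

```python
# A hypothetical "semantic" forecasting question: a stable entity, a
# property being forecast, and a resolution date, rather than free text.
from dataclasses import dataclass

@dataclass
class SemanticQuestion:
    entity: str    # stable identifier, e.g. a Wikidata QID
    prop: str      # the attribute being forecast
    date: str      # resolution date
    text: str      # human-readable rendering

question = SemanticQuestion(
    entity="Q76",  # Barack Obama's Wikidata ID
    prop="net worth (USD)",
    date="2030-01-01",
    text="What will Barack Obama's net worth be on 2030-01-01?",
)
```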
On the second part:
By public dashboards, I mean a rather static webpage that shows one set of questions but includes the most recent data about them. There have been a few of these so far. These are typically optimized for readers, not forecasters.
See:
https://goodjudgment.io/superforecasts/#1464
https://pandemic.metaculus.com/dashboard#/global-epidemiology
These are very different from Metaforecast. Metaforecast has thousands of questions and lets you search through them, but it doesn’t show historical data and doesn’t have curated lists. The dashboards, in comparison, have those features but are typically limited to a very specific set of questions.
This whole thing is a somewhat tricky issue and one I’m surprised hasn’t been discussed much before, to my knowledge.
But there’s not yet enough data to allow that.
One issue here is that measurement is very tricky, because the questions are all over the place. Different platforms have very different questions of different difficulties. We don’t yet really have metrics that compare forecasts across different sets of questions. I imagine historical data will be very useful, but extra assumptions would be needed.
We’re trying to get at some question-general statistic: basically, “expected score (which includes calibration and accuracy) adjusted for question difficulty.”
One claim this would support is: “If Question A is on two platforms, you should trust the one with more stars.”
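As a very rough sketch of the kind of statistic I have in mind (the adjustment scheme and all numbers are assumptions of mine, not how our stars are actually computed): subtract a per-question difficulty baseline from each Brier score, then average per platform.

```python
# Toy "difficulty-adjusted expected score": lower is better.
# Scores, questions, and difficulty baselines are all hypothetical.
import statistics

# (platform, question_id, brier_score) for resolved forecasts
forecasts = [
    ("A", "q1", 0.10), ("A", "q2", 0.30),
    ("B", "q1", 0.20), ("B", "q3", 0.05),
]

# Estimated Brier score a typical forecaster would get on each question
# (higher = harder question).
difficulty = {"q1": 0.15, "q2": 0.25, "q3": 0.10}

def adjusted(platform: str) -> float:
    scores = [s - difficulty[q] for p, q, s in forecasts if p == platform]
    return statistics.mean(scores)

for platform in ("A", "B"):
    print(platform, round(adjusted(platform), 3))
```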
It’s possible we have different definitions of “OK”.
I have worked with browser extensions before and found them to be a bit of a pain. You often have to do custom work for Safari, Firefox, and Chrome, and browsers change their standards, so you have to maintain and update the extensions in annoying ways at different times.
Perhaps more importantly, figuring out which text on a given webpage is the important text, and then finding semantic similarities to match it to questions, seems tricky to do well enough to be worthwhile. I can imagine a lot of very hacky approaches that would just be annoying most of the time.
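For what it’s worth, here’s the flavor of hacky approach I’m imagining (everything below is illustrative; real matching would need embeddings and a lot more care):

```python
# Toy matching of a page's text against forecasting-question titles.
# difflib's string similarity is a crude stand-in for real semantic matching.
import difflib

questions = [
    "Will the US enter a recession by 2025?",
    "Will SpaceX land humans on Mars by 2030?",
]

page_text = "Economists debate whether a US recession is coming."

def best_match(text: str, candidates: list[str]) -> tuple[float, str]:
    scored = [
        (difflib.SequenceMatcher(None, text.lower(), q.lower()).ratio(), q)
        for q in candidates
    ]
    return max(scored)

score, question = best_match(page_text, questions)
print(f"{score:.2f}  {question}")  # low scores like this are why it'd be annoying
```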
I was thinking of something that would be used by, say, 30 to 300 people who are doing important work.
Thanks! If you have requests for Metaforecast, do let us know!
Introducing Metaforecast: A Forecast Aggregator and Search Tool
Good to hear, and thanks for the thoughts!
Another way we could have phrased things would have been:
“This post was useful in ways X, Y, and Z. If it had done things A, B, and C, it would have been even more useful.”
It’s always possible to have done more. Some of the entries were very extensive. My guess is that you did a pretty good job per unit of time in particular. I’d think of the comments as things to think about for future work.
And again, nice work, and congratulations!
Like Larks, I’m happy that work is being put into this. That said, I find this issue quite frustrating to discuss, because I think a fully honest discussion would take a lot more words than most people would have time for.
This is the sort of statement that has multiple presuppositions I wouldn’t agree with:
- I pay my “fair share” in taxes
- There’s such a thing as a “fair share”
- There is some fairly objective and relevant notion of what one “needs to do”
The phrase is about as alien to me, and as far from my belief system, as an argument saying,
One method of dealing with the argument above would be something like,
“Well, we know that Zordon previously transmitted Zerketeviz, which implies that signature Y12 might be relevant, so actually charity is valid.”
But my preferred answer would be,
“First, I need you to end your belief in this Zordon figure.”
The obvious problem is that this latter point would take a good amount of convincing, but I wanted to put this out there.