Lukas_Finnveden
Research analyst at Open Philanthropy. All opinions are my own.
It’s very difficult to communicate to someone that you think their life’s work is misguided
Just emphasizing the value of prudence and nuance: I think that this^ is a bad and possibly false way to formulate things. Being the “marginal best thing to work on for most EA people with flexible career capital” is a high bar to clear, one that most people are not aiming for, and work to prevent climate change still seems like a good thing to do if the counterfactual is doing nothing. I’d only be tempted to call work on climate change “misguided” if the person in question believes that the risks from climate change are significantly bigger than they in fact are, and wouldn’t be working on climate change if they knew better. While this is true for a lot of people, I (perhaps naively) think that people who’ve spent their lives fighting climate change know a bit more. And indeed, someone who has spent their life fighting climate change probably has career capital that’s pretty specialized towards that, so it might be correct for them to keep working on it.
I’m still happy to inform people (with extreme prudence, as noted) that other causes might be better, but I think that “X is super important, possibly even more important than Y” is a better way to do this than “work on Y is misguided, so maybe you want to check out X instead”.
Because a double-or-nothing coin-flip scales; it doesn’t stop having high EV when we start dealing with big bucks.
Risky bets aren’t themselves objectionable in the way that fraud is, but to address this point narrowly: realistic estimates put risky bets at much worse EV when you control a large fraction of the altruistic pool of money. I think a decent first approximation is that EA’s impact scales with the logarithm of its wealth. If you’re gambling a small amount of money, that means you should be ~indifferent to a 50/50 double-or-nothing bet (note that even in this case it doesn’t have positive EV). But if you’re gambling with the majority of the wealth that’s predictably committed to EA causes, you should be much more scared of risky bets.
(Also in this case the downside isn’t “nothing” — it’s much worse.)
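To illustrate (a minimal sketch, assuming the “impact scales with log(wealth)” approximation above; the 1% and 90% stakes are arbitrary examples):

```python
import math

def expected_log_utility(wealth: float, stake: float) -> float:
    """Expected log-wealth after a 50/50 double-or-nothing bet of `stake`."""
    return 0.5 * math.log(wealth + stake) + 0.5 * math.log(wealth - stake)

wealth = 1.0
baseline = math.log(wealth)

# Betting 1% of wealth: expected log-wealth is only a hair below not betting at all.
print(expected_log_utility(wealth, 0.01 * wealth) - baseline)  # ~ -5e-5

# Betting 90% of wealth: expected log-wealth drops sharply.
print(expected_log_utility(wealth, 0.90 * wealth) - baseline)  # ~ -0.83
```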
conflicts of interest in grant allocation, work place appointments should be avoided
Worth flagging: Since there are more men than women in EA, I would expect a greater fraction of EA women than EA men to be in relationships with other EAs. (And trying to think of examples off the top of my head supports that theory.) If this is right, the policy “don’t appoint people for jobs where they will have conflicts of interest” would systematically disadvantage women.
(By contrast, considering who you’re already in a work-relationship with when choosing who to date wouldn’t have a systematic effect like that.)
My inclination here would be to (as much as possible) avoid having partners make grant or job-appointment decisions about their partners. But if someone seems to be the best fit for a job/grant (from the perspective of people who aren’t their partner), I wouldn’t deny them that just because it would put them in a position closer to their partner.
(It’s possible that this is in line with what you meant.)
I actually think the negative exponential gives too little weight to later people, because I’m not certain that late people can’t be influential. But if I had a person from the first 1e-89 of all people who’ve ever lived and a random person from the middle, I’d certainly say that the former was more likely to be one of the most influential people. They’d also be more likely to be one of the least influential people! Their position is just so special!
Maybe my prior would be like 30% to a uniform function, 40% to negative exponentials of various slopes, and 30% to other functions (e.g. the last person who ever lived seems more likely to be the most influential than a random person in the middle.)
Only using a single, simple function for something so complicated seems overconfident to me. And any mix of functions where one of them assigns decent probability to early people being the most influential is enough that it’s not super unlikely that early people are the most influential.
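As a toy illustration of that last point (a minimal sketch; the 30%/40%/30% weights are the ones above, while the conditional probabilities are made-up placeholders, not estimates):

```python
# P(the most influential person ever is among the very earliest people | prior shape)
p_if_uniform = 1e-89      # uniform prior: essentially zero
p_if_exponential = 0.5    # steep negative exponential: quite likely (placeholder value)
p_if_other = 0.01         # other shapes: some small probability (placeholder value)

# Mixing with the 30%/40%/30% weights from above:
p_mixture = 0.3 * p_if_uniform + 0.4 * p_if_exponential + 0.3 * p_if_other
print(p_mixture)  # ~0.2: not super unlikely, even though most weight isn't on the exponential
```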
The website now lists Helen Toner but does not list Holden, so it seems he is no longer on the board.
FWIW, I think my median future includes humanity solving AI alignment but messing up reflection/coordination in some way that makes us lose out on most possible value. I think this means that longtermists should think more about reflection/coordination-issues than we’re currently doing. But technical AI alignment seems more tractable than reflection/coordination, so I think it’s probably correct for more total effort to go towards alignment (which is the status quo).
I’m undecided about whether these reflection/coordination-issues are best framed as “AI risk” or not. They’ll certainly interact a lot with AI, but we would face similar problems without AI.
Sweden has a “Ministry of the Future,”
Unfortunately, this is now a thing of the past: it only lasted from 2014 to 2016. (Wikipedia on the minister post: https://en.wikipedia.org/wiki/Minister_for_Strategic_Development_and_Nordic_Cooperation )
Good point, but this one has still received the most upvotes, if we assume that a negligible number of people downvoted it. At the time of writing, it has received 100 votes. According to https://ea.greaterwrong.com/archive, the only previous posts that received more than 100 points have fewer than 50 votes each. As far as I can tell, the second and third most-voted-on posts are “Empirical data on value drift” at 75 votes and “Effective altruism is a question” at 68.
The future’s ability to affect the past is truly a crucial consideration for those with high discount rates. You may doubt whether such acausal effects are possible, but in expectation, on e.g. an ultra-neartermist view, even a 10^-100 probability that it works is enough, since anything that happened 100 years ago is >>10^1000 times as important as today is, with an 80%/day discount rate.
Indeed, if we take the MEC approach to moral uncertainty, we can see that this possibility of ultra-neartermism + past influence will dominate our actions for any reasonable credences. Perhaps the future can contain 10^40 lives, but that pales in comparison to the >>10^1000 multiplier we can get by potentially influencing the past.
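For the arithmetic behind that multiplier (a rough sketch, reading “80%/day discount rate” as each day being worth 20% of the previous one):

```python
import math

days_in_100_years = 100 * 365
importance_ratio_per_day = 1 / 0.2  # one day earlier is 5x as important under an 80%/day discount

log10_multiplier = days_in_100_years * math.log10(importance_ratio_per_day)
print(log10_multiplier)  # ~25,500: the past is ~10^25,500 times as important, i.e. >> 10^1000
```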
With regard to images, I get flawless behaviour when I copy-paste from Google Docs. Somehow, the images automatically get converted and link to the versions hosted with Google (in the editor, they’re only visible as small cameras). Maybe you can get the same behaviour by making your docs public?
Actually, I’ll test copying an image from a Google Doc into this comment: (edit: seems to be working!)
Meta note: that you got downvotes (I can surmise this from the number of votes and the total score) seems to suggest this is advice people don’t want to hear, but maybe they need.
I don’t think this position is unpopular in the EA community. “You have more than one goal, and that’s fine” got lots of upvotes, and my impression is that there’s a general consensus that breaks are important and that burnout is a real risk (even though people might not always act according to that consensus).
I’d guess that it’s getting downvotes because it doesn’t really explain why we should be less productive: it just stakes out the position. In my opinion, it would have been more useful if it, for example, presented evidence showing that unproductive time is useful for living a fulfilled life, or presented an argument for why living a fulfilled life is important even for your altruistic values (which Jakob does more of in the comments).
Meta meta note: In general, it seems kind of uncooperative to assume that people need more of things they downvote.
Because utility and integrity are wholly independent variables, so there is no reason for us to assume a priori that they will always correlate perfectly. So if we wish to believe that integrity and expected value correlated for SBF, then we must show it. We must actually do the math.
This feels a bit unfair when people have argued (i) that utility and integrity will correlate strongly in practical cases (why use “perfectly” as your bar?), and (ii) that they will do so in ways that will be easy to underestimate if you just “do the math”.
You might think they’re mistaken, but some of the arguments do specifically talk about why the “assume 0 correlation and do the math”-approach works poorly, so if you disagree it’d be nice if you addressed that directly.
Nitpicking:
A property of making directional claims like this is that MacAskill always has 50% confidence in the claim I’m making, since I’m claiming that his best-guess estimate is too high/low.
This isn’t quite right. Conservation of expected evidence means that MacAskill’s current probabilities should match his expectation of the ideal reasoning process. But for probabilities close to 0, this would typically imply that he assigns higher probability to being too high than to being too low. For example: a 3% probability is compatible with 90% probability that the ideal reasoning process would assign probability ~0% and a 10% probability that it would assign 30%. (Related.)
This is especially relevant when the ideal reasoning process is something as competent as 100 people for 1000 years. Those people could make a lot of progress on the important questions (including e.g. themselves working on the relevant research agendas just to predict whether they’ll succeed), so it would be unsurprising for them to end up much closer to 0% or 100% than is justifiable today.
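A quick check of the 3% example above (a minimal sketch using those same numbers):

```python
# The current credence should equal the expectation of what the ideal reasoning process would conclude.
p_ideal_says_near_zero = 0.9   # ideal process ends up at ~0%
p_ideal_says_thirty = 0.1      # ideal process ends up at 30%

expected_credence = p_ideal_says_near_zero * 0.0 + p_ideal_says_thirty * 0.30
print(expected_credence)  # 0.03: consistent with a current 3% estimate

# Yet "current estimate is too high" (90%) is much more likely than "too low" (10%).
print(p_ideal_says_near_zero > p_ideal_says_thirty)  # True
```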
But note the hidden costs. Climbing the social ladder can trade of against building things. Learning all the Berkeley vibes can trade of against, eg., learning the math actually useful for understanding agency.
This feels like a surprisingly generic counterargument, after the (interesting) point about ladder climbing. “This could have opportunity costs” could be written under every piece of advice for how to spend time.
In fact, it applies less to this post than to most advice on how to spend time, since the OP claimed that the environment caused them to work harder.
(A hidden cost that’s more tied to ladder climbing is Chana’s point that some of this can be at least somewhat zero-sum.)
I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early.
If we’re doing things right, it shouldn’t matter whether we’re building earliness into our prior or updating on the basis of earliness.
Let the set H=”the 1e10 (i.e. 10 billion) most influential people who will ever live” and let E=”the 1e11 (i.e. 100 billion) earliest people who will ever live”. Assume that the future will contain 1e100 people. Let X be a randomly sampled person.
For our unconditional prior P(X in H), everyone agrees that uniform probability is appropriate, i.e., P(X in H) = 1e-90. (I.e. we’re not giving up on the self-sampling assumption.)
However, for our belief over P(X in H | X in E), i.e. the probability that a randomly chosen early person is one of the most influential people, some people argue we should use, e.g., an exponential function where earlier people are more likely to be influential (which could be called a prior over “X in H” based on how early X is). But it seems like you’re saying that we shouldn’t assess P(X in H | X in E) directly from such a prior, and should instead get it from Bayesian updates. So let’s do that.
P(X in H | X in E)
= P(X in E | X in H) * P(X in H) / P(X in E)
= P(X in E | X in H) * 1e-90 / 1e-89
= P(X in E | X in H) * 1e-1
= P(X in E | X in H) / 10
So now we’ve switched over to instead making a guess about P(X in E | X in H), i.e. the probability that one of the 1e10 most influential people also is one of the 1e11 earliest people, and dividing by 10. That doesn’t seem much easier than making a guess about P(X in H | X in E), and it’s not obvious whether our intuitions here would lead us to expect more or less influentialness.
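Spelling that calculation out (a minimal sketch; the 10% value for P(X in E | X in H) is an arbitrary placeholder, not an estimate):

```python
total_people = 1e100
p_H = 1e10 / total_people   # P(X in H): being one of the 1e10 most influential people
p_E = 1e11 / total_people   # P(X in E): being one of the 1e11 earliest people

# Placeholder guess for P(X in E | X in H): how likely a maximally influential person is to be early.
p_E_given_H = 0.1

# Bayes: P(X in H | X in E) = P(X in E | X in H) * P(X in H) / P(X in E)
p_H_given_E = p_E_given_H * p_H / p_E
print(p_H_given_E)  # 0.01, i.e. p_E_given_H / 10
```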
Also, the way that 1e-90 and 1e-89 are both extraordinarily unlikely but divide out to 1e-1 illustrates Buck’s point:
if you condition on us being at an early time in human history (which is an extremely strong condition, because it has incredibly low prior probability), it’s not that surprising for us to find ourselves at a hingey time.
SSC argued that there was not enough money in politics
To be clear, SSC argued that there was surprisingly little money in politics. The article explicitly says “I don’t want more money in politics”.
Here’s one idea: Automatic or low-effort linking to wiki-tags when writing posts or comments. A few different versions of this:
When you write a comment or post that contains the exact name of a tag/wiki article, those words automatically link to that tag. (This could potentially be turned on/off in the editor or in your personal prefs.)
The same as the above, except it only happens if you do something special to the words, e.g. enclose them in [[double brackets]], surround them with [tag] [/tag], or capitalise them correctly. (Magic: The Gathering forums often have something like this for linking to cards; a rough sketch of the double-bracket version is below, after this list.)
The same as the above, except there’s some helpful search function that helps you find relevant wiki articles. E.g. you type [[ or you click some particular button in the editor, and then a box for searching for tags pops up. (Similar to linking to another page in Roam. This could also be implemented for linking to posts.)
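A rough sketch of how the double-bracket version might work (the tag table and URL here are hypothetical, not the Forum’s actual data or API):

```python
import re

# Hypothetical lookup from tag names to tag-page URLs.
TAG_URLS = {
    "Moral uncertainty": "https://example-forum.org/tag/moral-uncertainty",
}

def link_tags(text: str) -> str:
    """Replace [[Tag Name]] with a markdown link when the tag exists; otherwise leave the text unchanged."""
    def replace(match: re.Match) -> str:
        name = match.group(1)
        url = TAG_URLS.get(name)
        return f"[{name}]({url})" if url else match.group(0)
    return re.sub(r"\[\[([^\]]+)\]\]", replace, text)

print(link_tags("See [[Moral uncertainty]] for background."))
# -> "See [Moral uncertainty](https://example-forum.org/tag/moral-uncertainty) for background."
```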
I wanted to see exactly how misleading these were. I found this example of an attack ad, which (after some searching) I think cites this, this, this, and this. As far as I can tell:
The first source says that Salinas “worked for the chemical manufacturers’ trade association for a year”, in the 90s.
The second source says that she was a “lobbyist for powerful public employee unions SEIU Local 503 and AFSCME Council 75 and other left-leaning groups” around 2013-2014. The video uses this as a citation for the slide “Andrea Salinas — Drug Company Lobbyist”.
The third source says that insurers’ drug costs rose by 23% between 2013-2014. (Doesn’t mention Salinas.)
The fourth source is just the total list of contributors to Salinas’s campaigns, and the video doesn’t say which company she supposedly lobbied for that gave her money. The best I can find is that this page says she lobbied for Express Scripts in 2014, which is listed as giving her $250.
So my impression is that the situation boils down to: Salinas worked for a year for the chemical manufacturers’ trade association in the 90s, had Express Scripts as 1 out of 11 clients in 2014 (although the video doesn’t say they mean Express Scripts, or provide any citation for the claim that Salinas was a drug lobbyist in 2013/2014), and Express Scripts gave her $250 in 2018. (And presumably enough other donors can be categorised as pharmaceutical to add up to $18k.)
So yeah, very misleading.
(Also, what’s up with companies giving and campaigns accepting such tiny amounts as $250? Surely that’s net-negative for campaigns by enabling accusations like this.)