Research analyst at Open Philanthropy. All opinions are my own.
Lukas Finnveden
AGI and Lock-In
Implications of evidential cooperation in large worlds
Quantifying anthropic effects on the Fermi paradox
Memo on some neglected topics
Non-alignment project ideas for making transformative AI go well
It’s very difficult to communicate to someone that you think their life’s work is misguided
Just to emphasize the value of prudence and nuance: I think that this^ is a bad and possibly false way to formulate things. Being the “marginal best thing to work on for most EA people with flexible career capital” is a high bar to clear, one that most people are not aiming for, and work to prevent climate change still seems like a good thing to do if the counterfactual is doing nothing. I’d only be tempted to call work on climate change “misguided” if the person in question believes that the risks from climate change are significantly bigger than they in fact are, and wouldn’t be working on climate change if they knew better. While this is true for a lot of people, I (perhaps naively) think that people who’ve spent their lives fighting climate change know a bit more. And indeed, someone who has spent their life fighting climate change probably has career capital that’s pretty specialized towards that, so it might be correct for them to keep working on it.
I’m still happy to inform people (with extreme prudence, as noted) that other causes might be better, but I think that “X is super important, possibly even more important than Y” is a better way to do this than “work on Y is misguided, so maybe you want to check out X instead”.
Because a double-or-nothing coin-flip scales; it doesn’t stop having high EV when we start dealing with big bucks.
Risky bets aren’t themselves objectionable in the way that fraud is, but to address this point narrowly: realistic estimates put risky bets at much worse EV when you control a large fraction of the altruistic pool of money. I think a decent first approximation is that EA’s impact scales with the logarithm of its wealth. If you’re gambling a small amount of money, that means you should be ~indifferent to a 50⁄50 double-or-nothing (note that even in this case it doesn’t have positive EV). But if you’re gambling with the majority of the wealth that’s predictably committed to EA causes, you should be much more scared of risky bets.
(Also in this case the downside isn’t “nothing” — it’s much worse.)
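To make the log-wealth point concrete, here is a minimal sketch with made-up numbers (the wealth and stake figures are assumptions, chosen only to illustrate the asymmetry):

```python
import math

def ev_log_utility(wealth, stake, p_win=0.5):
    """Expected change in log-wealth from a double-or-nothing bet of `stake`."""
    win = math.log(wealth + stake)
    lose = math.log(wealth - stake)
    return p_win * win + (1 - p_win) * lose - math.log(wealth)

# Gambling 0.1% of wealth: EV in log terms is negative, but only barely.
small = ev_log_utility(1_000_000, 1_000)
# Gambling 90% of wealth: EV in log terms is strongly negative.
large = ev_log_utility(1_000_000, 900_000)
print(small, large)
```

The small bet comes out almost exactly neutral (slightly negative), while the large bet destroys a lot of expected log-wealth, which is the sense in which the same coin-flip gets worse as the stake grows relative to the pool.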
conflicts of interest in grant allocation, work place appointments should be avoided
Worth flagging: Since there are more men than women in EA, I would expect a greater fraction of EA women than EA men to be in relationships with other EAs. (And trying to think of examples off the top of my head supports that theory.) If this is right, the policy “don’t appoint people for jobs where they will have conflicts of interest” would systematically disadvantage women.
(By contrast, considering who you’re already in a work-relationship with when choosing who to date wouldn’t have a systematic effect like that.)
My inclination here would be to (as much as possible) avoid having people make grant/job-appointment decisions about their partners. But if someone seems to be the best fit for a job/grant (from the perspective of people who aren’t their partner), I wouldn’t deny them that just because it would put them in a position closer to their partner.
(It’s possible that this is in line with what you meant.)
Project ideas: Epistemics
Project ideas: Governance during explosive technological growth
Project ideas: Sentience and rights of digital minds
I actually think the negative exponential gives too little weight to later people, because I’m not certain that late people can’t be influential. But if I had a person from the first 1e-89 of all people who’ve ever lived and a random person from the middle, I’d certainly say that the former was more likely to be one of the most influential people. They’d also be more likely to be one of the least influential people! Their position is just so special!
Maybe my prior would be like 30% to a uniform function, 40% to negative exponentials of various slopes, and 30% to other functions (e.g. the last person who ever lived seems more likely to be the most influential than a random person in the middle.)
Only using a single, simple function for something so complicated seems overconfident to me. And any mix of functions where one of them assigns decent probability to early people being the most influential is enough that it’s not super unlikely that early people are the most influential.
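As a toy illustration of such a mixture (the population size, half-life, and weights below are assumptions for the sketch, not numbers I’d defend): even a modest weight on a negative-exponential prior makes an early person far more likely than a mid-history person to be the most influential, while the uniform component keeps the mid-history person’s probability from going to zero.

```python
import math

N = 10**11  # assumed rough number of people who will ever live

def uniform(i, n=N):
    """P(person i is most influential) under a uniform prior."""
    return 1 / n

def neg_exponential(i, n=N, half_life=10**9):
    """Negative-exponential prior: the probability of being most influential
    halves every `half_life` people (normalisation is approximate)."""
    weight = 0.5 ** (i / half_life)
    norm = half_life / math.log(2)  # approximate sum of all weights
    return weight / norm

def mixture(i):
    # 30% uniform, 40% negative exponential; the remaining 30% "other
    # functions" from the comment are ignored in this sketch.
    return 0.3 * uniform(i) + 0.4 * neg_exponential(i)

early, middle = 1, N // 2
print(mixture(early) / mixture(middle))  # early person is heavily favoured
```

The point of the sketch is just that the mixture behaves as described: the early person is favoured by a large factor, but “not super unlikely” applies in both directions because the uniform component never vanishes.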
Project ideas: Backup plans & Cooperative AI
The website now lists Helen Toner, but does not list Holden, so it seems he is no longer on the board.
Sweden has a “Ministry of the Future,”
Unfortunately, this is now a thing of the past. It only lasted 2014-2016. (Wikipedia on the minister post: https://en.wikipedia.org/wiki/Minister_for_Strategic_Development_and_Nordic_Cooperation )
Good point, but this one has still received the most upvotes, if we assume that a negligible number of people downvoted it. At the time of writing, it has received 100 votes. According to https://ea.greaterwrong.com/archive, the only previous posts that received more than 100 points have fewer than 50 votes each. As far as I can tell, the second and third most voted-on posts are Empirical data on value drift at 75 and Effective altruism is a question at 68.
FWIW, I think my median future includes humanity solving AI alignment but messing up reflection/coordination in some way that makes us lose out on most possible value. I think this means that longtermists should think more about reflection/coordination-issues than we’re currently doing. But technical AI alignment seems more tractable than reflection/coordination, so I think it’s probably correct for more total effort to go towards alignment (which is the status quo).
I’m undecided about whether these reflection/coordination-issues are best framed as “AI risk” or not. They’ll certainly interact a lot with AI, but we would face similar problems without AI.
The future’s ability to affect the past is truly a crucial consideration for those with high discount rates. You may doubt whether such acausal effects are possible, but in expectation, on e.g. an ultra-neartermist view, even a 10^-100 probability that it works is enough, since anything that happened 100 years ago is >>10^1000 times as important as today is, with an 80%/day discount rate.
Indeed, if we take the MEC approach to moral uncertainty, we can see that this possibility of ultra-neartermism + past influence will dominate our actions for any reasonable credences. Perhaps the future can contain 10^40 lives, but that pales in comparison to the >>10^1000 multiplier we can get by potentially influencing the past.
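The arithmetic behind that >>10^1000 claim (the 80%/day rate is from the comment above; the day count is approximate):

```python
import math

days = 365 * 100         # roughly 100 years ago, measured in days
daily_factor = 1 / 0.2   # 80%/day discounting: each day further back is 5x as important

# How many orders of magnitude more important the past is than today.
orders_of_magnitude = days * math.log10(daily_factor)
print(orders_of_magnitude)  # ~25500, comfortably above the claimed 1000
```

So the comparison holds with huge room to spare: any plausible estimate of future lives (e.g. the 10^40 figure) is swamped by a factor of ~10^25500.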
With regard to images, I get flawless behaviour when I copy-paste from Google Docs. The images automatically get converted and link to the images hosted with Google (in the editor they’re only visible as small camera icons). Maybe you can get the same behaviour by making your docs public?
Actually, I’ll test copying an image from a google doc into this comment: (edit: seems to be working!)
- 5 Jul 2019 18:25 UTC; 4 points — comment on “I find this forum increasingly difficult to navigate”
- 6 Nov 2019 16:16 UTC; 2 points — comment on “Formalizing the cause prioritization framework”
- 5 Jul 2019 20:32 UTC; 1 point — comment on “I find this forum increasingly difficult to navigate”
I wanted to see exactly how misleading these were. I found this example of an attack ad, which (after some searching) I think cites this, this, this, and this. As far as I can tell:
The first source says that Salinas “worked for the chemical manufacturers’ trade association for a year”, in the 90s.
The second source says that she was a “lobbyist for powerful public employee unions SEIU Local 503 and AFSCME Council 75 and other left-leaning groups” around 2013-2014. The video uses this as a citation for the slide “Andrea Salinas — Drug Company Lobbyist”.
The third source says that insurers’ drug costs rose by 23% between 2013-2014. (Doesn’t mention Salinas.)
The fourth source is just the total list of contributors to Salinas’s campaigns, and the video doesn’t say what company she supposedly lobbied for that gave her money. The best I can find is that this page says she lobbied for Express Scripts in 2014, which is listed as giving her $250.
So my impression is that the situation boils down to: Salinas worked for a year for the chemical manufacturers’ trade association in the 90s, had Express Scripts as 1 out of 11 clients in 2014 (although the video doesn’t say they mean Express Scripts, or provide any citation for the claim that Salinas was a drug lobbyist in 2013/2014), and Express Scripts gave her $250 in 2018. (And presumably enough other donors can be categorised as pharmaceutical to add up to $18k.)
So yeah, very misleading.
(Also, what’s up with companies giving and campaigns accepting such tiny amounts as $250? Surely that’s net-negative for campaigns by enabling accusations like this.)