This is fantastic, thanks!
First, I’ve donated $10 to Ought (effectively $35) here:
Make a $10 donation into $35 - EA Forum (effectivealtruism.org)
Given the small amount, I didn’t put much thought into it, and hence I don’t want to give detailed reasons here, to avoid spreading inaccurate memes. The very basic reason I chose an organisation working on AI safety was concern for the long-term future of humanity.
Second, I’m planning to do the rest, and bulk, of my giving through the donor lottery, mostly for the standard reasons found at the link. (One-sentence summary: the expected amount donated is the same as if you gave directly, but if you win, the much larger amount justifies putting more careful thought into your donation.)
Specifically, I am giving to the 100k block. That is because at 20k I would probably rather lose than win: the amount would be big enough that putting in effort and research matters, but it may be too small to really justify delaying any career opportunities. At 100k I’d rather win: at that point I think it would be worth taking some time off to focus on the decision. That would be super interesting and would hopefully help me fine-tune my thinking on some important EA matters. It could also tell me whether I would enjoy being a grant-maker, and give me something to show if I decided I did. Given I’d rather win than lose at 100k, 500k is out of the question, since the larger block would only lower my odds of winning.
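The expected-value claim in the summary above can be checked with quick arithmetic. Here is a minimal sketch with hypothetical numbers (a 100k block and a $5k contribution; the real figures would differ):

```python
# Donor lottery expected value, with hypothetical numbers.
block_size = 100_000   # total pot of the lottery block, in dollars
my_donation = 5_000    # my hypothetical contribution, in dollars

# Chance of winning is proportional to my share of the block.
p_win = my_donation / block_size          # 0.05

# If I win, I direct the whole block; otherwise I direct nothing.
expected_directed = p_win * block_size

# The expectation equals donating directly, but a win justifies
# far more research into where the money should go.
print(expected_directed)  # 5000.0, equal to my_donation
```

The asymmetry is entirely in the research incentive, not the expectation: the amount directed in expectation is identical either way.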
In the past I donated through the EA Funds. I still think that’s probably a decent way of giving, and I might even end up giving the money there if I won the lottery, though I probably wouldn’t.
Part of the reason I chose the lottery instead is that I think it sits closer to the optimum on the spectrum of how large the donation decisions individuals get to make are: donors giving up to a few thousand dollars may not put in enough research to make optimal decisions, while at the other extreme, if individuals get to decide over budgets of millions or more, that may skew the total EA portfolio too far towards their idiosyncratic preferences.
I feel there must be an optimum between these two extremes, and that it is probably very roughly in the 100k ballpark. However, I don’t have strong opinions on the exact order of magnitude. It may be in the millions, in which case this specific argument against donating to the EA Funds vanishes. Of course, things also depend on the specifics: maybe spectacularly good allocators should get much larger chunks, though it is probably very hard to tell who they are.
Thanks for your work and thanks for doing this!
In your interview with Patrick Collison, he says the following:
“I think of EA as sort of like a metal detector, and they’ve invented a new kind of metal detector that’s really good at detecting some metals that other detectors are not very good at detecting. But I actually think we need some diversity in the different metallic substances which our detectors are attuned to, and for me EA would not be the only one”
Discussion on the EA forum here, link to the interview here.
First, do you broadly agree with that framework?
Second, given that you likely think that progress studies is one of the most important things to work on, do you think it should worry us that the EA detector did not, on its own, seem to pick up on progress studies as an opportunity to do good before it became a more mainstream view? Why didn’t EAs launch this field years ago? Why isn’t it one of the main EA cause areas? Does this hint at a way our detector may be broken? (To be clear, I am personally agnostic for now as to whether this should be a main EA cause area.)
Third, how can we tune the EA metal detector to be more effective at finding new niches where there’s room to do good effectively? I think Patrick is probably right that the EA detector isn’t good enough to pick up on everything that you would want to pick up on. But unlike other detectors, we do have the explicit goal to find all the most important things to do at the margin. So how can we get closer to that goal?
One thing that probably helps me stay motivated is listening to the 80,000 Hours and Future of Life podcasts. It’s lower effort (for me) than e.g. reading essays, and hence a good option for periods when I’m especially busy. I don’t know how motivated I’d feel without them, but I suppose hearing from lots of people who try to improve the world can be uplifting, and maybe even more personal than blog posts. This might not work if you’re not a ‘podcast person’, though.
Thank you very much for that reply, you have convinced me and I will try all of those things.
Also, I now realise that sharing specific links etc. has the added bonus of giving you a reason to post many times about EA-themed things, instead of just once, so you can hopefully reach more people.
Hello, I think you make a good point about the necessity of carefully weighing the upsides and downsides of each system.
I do not have a strong view on which alternative voting system is best, since I haven’t looked into it deeply enough. Still, I want to address this proposition:
Much more is gained by displacing plurality than is lost by replacing it with a suboptimal alternative (for all reasonable alternatives to plurality).
I mostly agree with this position, especially in scenarios where no other option is realistically on the table. However, I do want to point out that adopting a sub-optimal system can carry a considerable cost, and that it is not obvious that this cost is negligible relative to the gains from switching away from the status quo; in particular, if one believes that the difference in outcomes between two alternative voting systems is large.
For instance, one might expect alternative voting system B to lead to much better results than system A. If so, then switching to A (the weaker system), though probably better than the status quo in itself, could still lead to worse outcomes than not switching at all, for two main reasons:
First, as Tobias points out, countries do not change their voting systems frequently. Hence the sub-optimal system A might stick around for a century to come before perhaps being changed to the better alternative B. It might be preferable to postpone the switch by a few years, hopefully increasing the odds of switching to B instead of A.
Secondly, this new system A will inevitably be questioned by the electorate and the media. If system A then yields controversial results that are not obviously better than those the status quo system would have produced, the whole switch might be viewed as a mistake by the general population. This might even lead to less trust in the political system, though probably only in the short run. Still, a negative experience of this kind may not only have short-term bad consequences for the country itself, in the form of further erosion of trust, but could also discourage other countries from switching away from their respective status quo systems for years to come.
Of course, I’m not arguing that switching should be postponed until absolute certainty of one system being better than all others is reached. (That point will probably never come.)
And, of course, I also acknowledge that the opposite of the described scenario might happen, i.e. that one country switching might encourage others to do so, rather than discourage them.
All I’m saying is that there is a case against switching, and that therefore not every system that seems preferable to the status quo should automatically be endorsed.