I think ASB’s recent post about Peak Defense vs Trough Defense in Biosecurity is a great example of how the longtermist framing can end up mattering a great deal in practical terms.
Mathieu Putz
[Unendorsed] — An update in favor of trying to make tens of billions of dollars
[April fool’s post] Proposal to assign careers by birthdate
Hi, very cool that you’re doing this, thanks! Here’s my question:
How likely is it that taking psychedelics makes patients weird? Scott Alexander wrote up some anecdotes about many early psychedelicists getting weird as they experimented with these substances. He emphasizes it’s all very speculative and of course subjective. And it probably involved pretty high doses / frequencies. But my very superficial understanding is that it’s hard to find good studies on this sort of thing, precisely because of the regulatory environment. Is that accurate, or am I missing something? Should patients seriously consider the risk that their personality, motivations, and view of the world would be significantly altered in a way their current self wouldn’t necessarily endorse? Would that be a bad thing?
I’m not sure what my general take is on this, I think it’s quite plausible that keeping it exclusive is net good, maybe more likely good than not. But I want to add one anecdote of my own which pushes the other way.
Over the last two years, while I was a student, I made two career choices in part (though not only) to gain EA credibility:
I was a group organizer at EA Munich (~2 hours a week)
I did a part-time internship at an EA org (~10 hours a week)
Both of these were fun, but I think it’s unlikely that they were good for my career or impact in ways other than gaining EA credibility. I think one non-trivial reason EA credibility was important to me was that I wanted to keep being admitted to things like EAG (maybe more than I admitted to myself in my explicit reasoning at the time).
Having said that, I think EA credibility has also been important to my career in other ways, notably to receive grants, so it’s not clear that this was bad on net.
It might also be that these were unnecessary or ineffective ways of gaining EA credibility—I don’t know what the admissions team cares about. Regardless, I think it’s an update that this is part of what led me to make choices that I otherwise might not have made (though quite plausibly I would have made them anyway).
Thanks so much for looking after possibly my favorite place on the internet!
Hello, I think you make a good point about the necessity of carefully weighing the upsides and downsides of each system.
I do not have a strong view on which alternative voting system is best, since I haven’t looked into it deeply enough. Still, I want to address this proposition:
Much more is gained by displacing plurality than is lost by replacing it with a suboptimal alternative (for all reasonable alternatives to plurality).
I mostly agree with this position, especially in scenarios where no other option is realistically on the table. However, I do want to point out that adopting a suboptimal system can have a considerable cost, and it is not obvious that this cost is negligible relative to the gains from switching away from the status quo, particularly if one believes that the difference in outcomes between two alternative voting systems is large.
For instance, one might expect alternative voting system B to lead to much better results than system A. If so, then switching to A (the weaker system), though probably better than the status quo in itself, could still lead to worse outcomes than if the switch had not happened, for two main reasons:
First, as Tobias points out, countries do not change their voting system frequently. Hence the suboptimal system A might stick around for a century before perhaps being replaced by the better alternative B. It might be preferable to postpone the switch by a few years, hopefully increasing the odds of switching to B instead of A.
Second, this new system A will inevitably be questioned by the electorate and the media. If system A then yields controversial results that are not obviously better than what the status quo system would have produced, the whole switch might be viewed as a mistake by the general population. This might even lead to less trust in the political system, though probably only in the short run. Still, a negative experience of this kind may not only have bad short-term consequences for the country itself, in the form of further erosion of trust, but could also discourage other countries from switching away from their respective status quo systems for years to come.
Of course, I’m not arguing that switching should be postponed until absolute certainty of one system being better than all others is reached. (That point will probably never come.)
And, of course, I also acknowledge that the opposite of the described scenario might happen, i.e. that one country switching might encourage others to do so rather than discourage them.
All I’m saying is that there is a case against switching, and that therefore not every system that seems preferable to the status quo should automatically be endorsed.
Thanks for this! I think it’s good for people to suggest new pitches in general. And this one would certainly allow me to give a much cleaner pitch to non-EA friends than rambling about a handful of premises and what they lead to and why (I should work on my pitching in general!). I think I’ll try this.
I think I would personally have found this pitch slightly less convincing than current EA pitches, though. But one problem is that I and almost everyone reading this were selected for liking the standard pitch (though, to be fair, whatever selection mechanism EA currently has seems to be pretty good at attracting smart people and might be worth preserving). It would be interesting to see some experimentation; perhaps some EA group could try this?
[Question] What’s the best machine learning newsletter? How do you keep up to date?
This is so useful! I love this kind of post and will buy many things from this one in particular.
Probably a very naive question, but why can’t you just take a lot of DHA **and** a lot of EPA to get both supplements’ benefits? Especially if your diet means you’re likely deficient in both (which is true of veganism? vegetarianism?).
Assuming the Reddit folk wisdom about DHA inducing depression was wrong (which it might not be, I don’t want to dismiss it), I don’t understand from the rest of what you wrote why this doesn’t work? Why is there a trade-off?
Studying stimulants’ and anti-depressants’ long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin)
Economic Growth, Effective Altruism
Is it beneficial or harmful for long-term productivity to take Modafinil, Adderall, Wellbutrin, or other stimulants on a regular basis as a healthy person (some people speculate that it might make you less productive on days where you’re not taking it)? If it’s beneficial, what’s the effect size? What frequency hits the best trade-off between building up tolerance vs short-term productivity gains? What are the long-term health effects? Does it affect longevity?
Some people think that taking stimulants regularly provides a large net boost to productivity. If true, that would mean we could relatively cheaply increase the productivity of the world and thereby increase economic growth. In particular, it could also increase the productivity of the EA community (which might be unusually willing to act on such information), including AI and biorisk researchers.
My very superficial impression is that many academics avoid researching the use of drugs in healthy people and that there is a bias against taking medications unless “needed”.
So I’d be interested to see a large-scale, long-term RCT (randomized controlled trial) that investigated these issues. I’m unsure about exactly how to do this. One straightforward example would be having two randomized groups, giving the substance to one of them for X months/years, and seeing whether that group has higher earnings after that period. Ideally, the study participants would perform office jobs, rather than manual labor (since that is where most of the value would come from); perhaps even especially cognitively demanding tasks, such as research or trading. In the case of research, metrics such as the number of published articles or number of citations would likely make more sense than earnings.
One could also check health outcomes, probably including mental health. Multiple substances or different dosing regimens could be tested at once by adding study arms.
Notes:
- One of the main reasons I would care about this is improving the effectiveness of people working to prevent X-risks, but I’m not sure whether that fits neatly into any of your categories (and whether that’s intentional).
- I’m not at all sure whether this is a good idea, but tried to err on the side of over-including since that seems productive while brainstorming; I haven’t thought about this much.
- It may be that such studies exist and I just don’t know about them (pointers?).
- It may be impossible to get this approved by ethics boards, though hopefully in some country somewhere it could happen?
Can you say more about the 20% per year discount rate for community building?
In particular, is the figure meant to refer to time or money? I.e. does it mean that:
- you would trade at most 0.8 marginal hours spent on community building in 2024 for 1 marginal hour in 2023?
- you would trade at most 0.8 marginal dollars spent on community building in 2024 for 1 marginal dollar spent on community building in 2023?
- something else? (possibly not referring to marginal resources?)
(For money, a 20% discount rate seems very high to me, barring very short timelines or something similar. It would presumably imply that you think Open Phil should be spending much more on community building, until the marginal dollar no longer has such high returns?)
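For concreteness, here is a small sketch of the compounding implied by a constant 20% annual discount rate (my own illustration, not from the original question; the function name and the assumption of a flat multiplicative per-year rate are mine):

```python
def discounted_value(amount: float, years: int, annual_discount: float = 0.20) -> float:
    """Present value of `amount` of a resource delivered `years` from now,
    assuming a constant annual discount rate applied multiplicatively."""
    return amount * (1 - annual_discount) ** years

# Under a 20% rate, a marginal 2024 dollar is worth 0.8 of a 2023 dollar,
# and a 2028 dollar only about a third of one:
print(discounted_value(1.0, 1))  # ~0.8
print(discounted_value(1.0, 5))  # ~0.33
```

This is just the standard geometric-discounting arithmetic; the substantive question above is whether the 20% figure refers to hours, dollars, or something else.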
If you’re wondering who you might know in Oregon, you can search your Facebook friends by location:
Search for Oregon (or Salem) in the normal FB search bar, then go to People. You can also select to see “Friends of Friends”.
I assume that will miss a few, so it’s probably worth also actively thinking about your network, but this is probably a good low-effort first start.
Edit: Actually they need to live in district 6. The biggest city in that district is Salem as far as I can tell. Here’s a map.
[Question] First vs. last name policies?
EA Hotel / CEEALAR except at EA Hubs
Effective Altruism
CEEALAR is currently located in Blackpool, UK. It would be a lot more attractive if it were in e.g. Oxford, the Bay Area, or London. This would allow guests to network with local EAs (as well as other smart people, of whom there are plenty in all of the above cities). Insofar as budget is less of a constraint now, and insofar as EA funders are already financing trips to such cities for select individuals (for conferences and otherwise), an EA Hotel there would seem justified on the same grounds. (E.g. intercontinental flights can sometimes cost more than one month’s rent in those cities.)
This seems really exciting!
I skimmed some sections so might have missed it in case you brought it up, but I think one thing that might be tricky about this project is the optics of where your own funding would be coming from. E.g. it might look bad if most (any?) of your funding was coming from OpenPhil and then Dustin Moskovitz and Cari Tuna were very highly ranked (which they probably should be!). In worlds where this project is successful and gathers some public attention, that kind of thing seems quite likely to come up.
So I think, conditional on thinking this is a good idea at all, this may be an unusually good funding opportunity for smaller earning-to-givers. Unfortunately, the flip side is that fundraising for this may be somewhat harder than for other EA projects.
Thanks for pointing this out, wasn’t aware of that, sorry for the mistake. I have retracted my comment.
Great post, thanks for writing it! This framing appears a lot in my thinking and it’s great to see it written up! I think it’s probably healthy to be afraid of missing a big multiplier.
I’d like to slightly push back on this assumption:
First, I agree with other commenters and yourself that it’s important not to overwork / look after your own happiness and wellbeing etc.
Having said that, I do think working harder can often have superlinear returns, especially if done right (otherwise it can have sublinear or negative returns). One way to think about this is that the last year of one’s career is often the most impactful in expectation, since one will have built up seniority and experience. Working harder is effectively a way of “pulling that last year forward a bit” and adding another even higher impact year after it. I.e. a year that is much higher-impact than your average year, hence the superlinearity.
Another way to think about this is intuitively. If Sam Bankman-Fried had only worked 20% as hard, would he have made $4 billion instead of $20 billion? No. He would probably have made much much less. Speed is rewarded in the economy and working hard is one way to be fast.
This makes the multiplier from working harder bigger than you would intuitively expect and possibly more important relative to judgment than you suggest.
(I’m not saying everyone reading this should work harder. Some should, some shouldn’t.)
Edited shortly after posting to add: There’s also a more straightforward reason that the claim “judgment is more important than dedication” is technically true but potentially misleading: one way to get better judgment is investing time into researching thorny issues. That seems to be what Holden Karnofsky has been doing for a decent fraction of his career.