Sure, I think that makes sense if we see EA as just another preference like any other. If we were 100% certain there was no free will, though, I think it would greatly reduce the moral force of the argument supporting EA (and any decision-guiding framework), as I couldn’t reasonably tell someone, or myself, ‘you ought to do X over and above Y’.
OscarD
[Question] How Should Free Will Theories Impact Effective Altruism?
Here is the recording, I recommend it:
Food Security: Pests and Diseases Report
Good point, I have fixed it to now refer to cost-benefit ratios. They used a 5% discount rate, though they found similar results under 3% and 10%.
I did not come across any research on the rapid reduction of food losses. Market mechanisms could play a significant role here, I imagine: if the price of food quadrupled after a catastrophe impacting food production, all actors would be far more motivated to reduce wastage even when it requires extra labour or money. If a food crisis were looming, governments would also increase their focus on maximising production and minimising wastage, which could bring significant resources to bear on the problem. So I think post-harvest losses would be markedly reduced quite rapidly. But sadly there is no quantification or proper research on this that I am aware of.
Thanks for this, I found section 4 in particular useful.
“A life worth living is standardly understood as a life that contains more suffering than happiness.” Not quite!
I am also interested in future internship plans. Specifically, how flexible are the dates and time commitments?
As someone based in Australia, seasonal descriptors (presumably from the Northern hemisphere) aren’t ideal, though I can convert them—specific months would be preferable :) Also our university holiday periods are different, so I will need to work around that too.
Thank you. I found this moving. I identify with the quandary of how much and when to share this view of the world of ours with others.
Thanks for this; German politics and governance continues to be (despite flaws) a hopeful example to the Anglosphere! If only more countries were more like Germany.
[Question] Moral trades for tax deductibility
“We are not able to sponsor US employment visas for participants” from https://www.openphilanthropy.org/open-philanthropy-technology-policy-fellowship/
Given this, I assume that for people with no connection to the US (not citizens, no green card, etc.) there is no point in applying?
This seems like an important point to make in the main post as it rules out probably the majority of people opening this post.
The intransitive dice work because we do not care about the margin of victory. In expected value calculations the same trick does not work, so these three lives are all equal, with expected value 7/2.
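The original post’s dice aren’t reproduced here, so as an illustration the sketch below uses one standard intransitive set (an assumption, not necessarily the post’s exact dice). It checks both halves of the point: each die beats the next in the cycle with probability above 1/2, yet all three have the same expected value of 7/2.

```python
from fractions import Fraction
from itertools import product

# A classic intransitive dice set (assumed for illustration; not
# necessarily the dice from the original post). A beats B, B beats C,
# and C beats A, each with probability > 1/2.
A = [3, 3, 3, 3, 3, 6]
B = [2, 2, 2, 5, 5, 5]
C = [1, 4, 4, 4, 4, 4]

def mean(die):
    """Expected value of a fair roll of the die."""
    return Fraction(sum(die), len(die))

def p_beats(x, y):
    """Probability die x rolls strictly higher than die y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return Fraction(wins, len(x) * len(y))

# Head-to-head comparisons are cyclic (intransitive)...
print(p_beats(A, B), p_beats(B, C), p_beats(C, A))
# → 7/12 7/12 25/36

# ...but expected values cannot be cyclic: all three are exactly 7/2.
for die in (A, B, C):
    assert mean(die) == Fraction(7, 2)
```

The head-to-head comparison throws away the margin of victory, which is what allows the cycle; expected value keeps the margins, so it imposes a total (here, flat) ordering.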
Hi, I think I share these intuitions (surveillance is bad) but have a few qualms about your arguments:
Regarding multi-layered defence, I agree it seems best not to rely solely on one protective mechanism. I am unconvinced that having super surveillance will significantly weaken other defence mechanisms. (I don’t think people wearing seat belts drive more recklessly?) Also, if we grant that people will be lulled into a false sense of security, then I could well imagine malicious actors would likewise assume surveillance is very effective, and think ‘oh well, I won’t try to end the world as I’d just get caught.’ Alternatively, if surveillance is more a bluff than something that actually works well, it may still impose significant costs on malicious actors, e.g. not being able to recruit or communicate over long distances, coordination problems, and generally just slowing them down because they are spending resources trying not to be surveilled.
Regarding Hanna’s comment, as you note with CCTV, I think humans are just remarkably adaptable, and while there may be some transition pains, I think growing up in a fully-surveilled society wouldn’t seem that bad or strange. Because people get used to things, I think we could also keep being weird and thinking independently, as long as the surveillance was indeed tightly focused on preventing mega-bad things.
I also share Jack’s worry that these somewhat fuzzier concerns about people thinking less independently and being anxious, boring, and mainstream do rather pale in comparison to reducing catastrophic risks, at least if one places some credence on more totalising versions of longtermism. Thus, the key reasons I’m not super bullish on surveillance are that it would be really hard to implement well and globally, as you note, and that the totalitarianism risk seems major and plausibly outweighs the gains.
Thanks for this, I agree that it seems valuable to think carefully about the foundations of different research agendas and how justified these are. Indeed, this seems analogous to the traditional EA pursuit of cause prioritisation: thinking carefully about the underlying assumptions and methodologies of different approaches to doing good, and comparing how well justified these are. To stretch the analogy, there may be some alignment equivalents of deworming that seem to have a strong chance of having little value but are still worthwhile in EV terms because of the possibility of having an outsized impact.
While I feel relatively unequipped to do useful direct alignment research (rowing), I feel even more unequipped to do steering. I think this is a general feature of the world rather than just of me, that in order to usefully interrogate the axioms of a research agenda and compare the promisingness of different agendas it is very valuable to be quite familiar with these approaches, especially having already tried rowing in each. For instance in biology, people often start out doing relatively menial lab work to help a senior person’s project, then start directing particular experiments, after several years will run whole research projects, and usually only later in their career will they be well-placed to judge the overall merits of various research agendas. Even though senior researchers are better at pipetting than undergrads, the comparative advantage of the undergrads is to pipette, and of the senior people is to steer and direct.
Likewise in alignment research, it seems most valuable for less experienced people to try rowing within one or more research agendas, and only later try to start their own or compare the value proposition of the different agendas.
I don’t think this disagrees with what you wrote, it just explains why I think I should not be steering (yet).
Regarding the ‘plausible research agendas’ that should be pursued, I generally agree, while noting that even deciding on plausibility isn’t necessarily uncontroversial. Currently, I suppose it is grantmakers that decide this plausibility, which seems alright.
Also, given the large amounts of money available for conducting plausible alignment research, it seems less valuable to steer or think about the relative value of different research agendas, as it is less decision-relevant when almost everything will be funded anyway. Though in the future, if community-building is very successful and we 10x the number of alignment researchers, I imagine prioritisation within alignment would become a lot more important.
Yes, good point, I now think I was wrong about how important the amount of funding is for steering.
I like the structure and style of this piece, and think it makes sense for this central resource to be more formal and less emotional, and leave the more anecdote-y articles to media pieces which will have a wider audience anyway.
I think “greater significance to the industrial revolution” should be “greater significance than the industrial revolution”
Sounds good! Do you plan to publish the results each month on the forum, or if not what is a good way to get a quick summary of the results each month?
OK great, I’d be keen to bookmark the dashboard to check each month or get an email reminder if you set up a mailing list.
Thanks for this; I have been involved in environmental activism for a while and EA only more recently, and the importance I place on divestment and ESG personal investment choices has decreased accordingly. While I am generally utilitarian-sympathetic, I think I would still struggle to invest in funds supporting fossil fuels, at least from virtue ethics and deontological perspectives. As such I use Australian Ethical Investments, which I am happy enough with after a small amount of research. I would be happy to hear suggestions for Australia-based individuals, though that is fine if it is outside your scope.