I work on AI Governance at Open Philanthropy. Comments here are posted in a personal capacity.
alex lawsen (previously alexrjl)
Amazon Smile
I think this is a valid concern, and I certainly don’t think presenting ‘Amazon Smile is the sort of thing EAs do’ is particularly useful or accurate. To try to be slightly more clear about why I do think the mention is a useful starting point:
Full EA can be quite a lot to try to introduce to people all at once, even when those people already want to help.
Asking people to carefully consider how they make a specific donation is a gentle way in, at least to ‘soft EA’. (Giving games are another example of this)
Amazon Smile is a specific donation that you can ask people to consider how they make. If they haven’t heard of it before, it’s likely that their net experience of hearing about it and setting it up will be positive (they get to donate to a charity with no downside, again rather like a giving game). My hope is that this positive experience will make people more likely to consider where their donations go in future, and/or to respond positively to future things they hear about EA. I’m uncertain about how large the effects in each case will be, but don’t think they will be negative.

I am concerned, however, about the effect of setting up Amazon Smile on the total amount someone donates in future, which I think will be negative if you ignore any potential introduction to EA. This means the probability of the exercise being positive depends on how likely you are to be able to use the conversation as a productive starting point.
I’m not sure many EAs will agree with your intuition (If I’m understanding your question correctly) that it’s morally wrong to kill one person to save 10. There are certainly some moral philosophers who do, however. This dilemma is often referred to as the “trolley problem”, and has had plenty of discussion over the years.
You may find this interesting reading; it turns out that people’s intuitions about similar problems vary quite a lot based on culture.
Can the EA community copy Teach for America? (Looking for Task Y)
Thank you all for the positive comments and extremely useful feedback! I’ve edited some subheadings and a summary into the original post, though I’ve (optimistically) left the title so that people who’ve read the post and want to come back to participate in the discussion don’t get lost. I’ve also included John’s question in the list of important questions to ask.
Thanks for this. I’ve edited your question into the post. I actually think the third bullet point you wrote captures a lot of why I’m excited about a potential Task Y (or list, like the one Aaron posted). If people have the option to do something which both genuinely is good and seems good to them, and hear that this is actively encouraged by the EA community and enough to be considered a valuable part of it, I think this goes quite a long way towards stopping it seeming so elitist. Having multiple levels of commitment available to people, with good advice about the most effective thing to do given a particular level of commitment, seems to plausibly have lots of potential.
I have price discrimination in my head as a model here, though I realise the analogy is not a perfect one.
I’m a white male, and I view my own comfort in debate spaces merely as a means to reach truth, and welcome attempts to trade the former for the latter. Of course, you may be thinking “that’s easy for you to say, cause you’re a white male!” And there’s no point arguing with that because no one will be convinced of anything. But I’m at least going to say that it is the right philosophy to follow.
Consider the possibility that the philosophy you mention is not as easy for everyone to follow as it is for you. When the entirety of society is built with your comfort in mind, it’s very easy to sacrifice some of it as a means to reach truth, especially as, as soon as you leave the debate space, you can go back to not thinking about any of the discomfort you experienced during the discussion. You are safe putting all of your emotional and intellectual energy into the debate space, knowing that if the conversation gets too much for you, you can opt to leave at any time and go on with the rest of your day.
If, however, someone lives in a world where every day includes many instances of them being made uncomfortable (even if each instance might seem trivially small to someone who only sees one), which they have no option to switch off, that person cannot safely put all of their emotional and intellectual energy into a debate space which asks them to sacrifice their level of comfort. They don’t have the option of participating in a discussion until it becomes too much, because they have to save energy with which to get through the rest of the day. As a result, they are not able to participate in the debate on an equal footing (if they even choose to at all, given the emotional effort involved).
To be clear, I didn’t make the above point in order to say “you should feel bad because you’re white and male”. I also didn’t make it to say “you should just shut up and defer to the opinions of others here because you’re white and male”. I made it to try to explain why the choice to say “everyone just needs to suck it up and deal with their own discomfort” is not a choice with no downside; it puts the debate on an uneven footing, where people are not able to participate equally. It doesn’t seem a stretch to then say that debates where some people are at an inherent disadvantage from the start are not as self-evidently optimal a truth-seeking exercise as they may first seem.
“Even though historically men have been granted more authority than women, influence of feminism and social justice means that in many circumstances this has been mitigated or even reversed. For example, studies like Gornall and Strebulaev (2019) found that blinding evaluators to the race or sex of applicants showed that by default they were biased against white men.”
That is an unreasonably strong conclusion to draw from the study you’ve cited, not least given that even in the abstract of that study the authors make it extremely clear that “[their] experimental design is unable to capture discrimination at later stages [than the initial email/expression of interest stage]”. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3301982
[Edited for tone]
I looked for the study because I was surprised by the strength of the statement it was used to support. When I found the study, I was annoyed to find that it doesn’t actually come close to supporting a claim of that strength. This annoyance prompted the tone of the original post, which you have characterised fairly, and which was a mistake. I’ve now edited this out, because I don’t want it to distract from the claim I am making:
The study does not support the claim that it is being used to support.
Two of FIRE’s conditions require that victims of sexual assault face their assailant in order to have any hope of justice. I’m extremely glad that EA organisations violate FIRE’s “safeguards”.
I do have strong feelings about this, but having strong feelings and having given complex issues careful consideration are not mutually exclusive, and the implication otherwise was uncalled for. Having carefully considered the issue, I have concluded that the anonymity of sexual assault victims is the most important factor here, and I’m not alone in this conclusion. The UK legal system, for example, agrees.
Given that you easily identified that “access all evidence” was the other criterion which risked anonymity, I don’t think it’s too hard to see the connection between them.
The idea of a prize for a spectacular breakthrough in the area of energy seems promising but I remain unconvinced that cold fusion, however repackaged, is the basket to put our eggs in here.
Cheap, high-capacity batteries which could be recharged arbitrarily many times could have as transformative an effect on our energy production and consumption as a new fuel source, by making a 100% renewable grid feasible, as well as making electric vehicles far more attractive. A breakthrough in high-temperature superconductivity could be similarly transformative.
I think sometimes it’s too easy to get caught up in the excitement of finding a highly neglected idea, and in doing so miss the fact that it may be highly neglected for extremely good reasons.
This is spot on, and thinking about this was what prompted me to originally start thinking about trying to identify a ‘Task Y’. I’m relatively convinced that earning to give is a good Task Y in many situations, but working with students is not one of them.
[Question] What book(s) would you want a gifted teenager to come across?
Nice idea, I’ve filled in your form as a potential mentor. :)
Given the probable existence of several catastrophic “tipping points” in climate change, as well as feedback loops more generally such as melting ice reducing solar reflectivity, it seems likely that averting CO2 emissions in the future is less valuable than doing so today.
To do: Figure out an appropriate discount rate to account for this.
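As a rough sketch of what such a discount rate would do once chosen (the 3% rate below is a placeholder assumption of mine, not a figure from the post, and tipping-point risk would argue for a higher one):

```python
# Hypothetical sketch: the relative value of averting one tonne of CO2
# t years from now, compared with averting it today, under a constant
# annual discount rate. The rate itself is exactly the open question
# in the to-do above; 0.03 here is an illustrative placeholder.
def relative_value(years_from_now: float, annual_discount_rate: float = 0.03) -> float:
    """Value of a future tonne averted, as a fraction of a tonne averted today."""
    return (1 + annual_discount_rate) ** (-years_from_now)

print(relative_value(0))   # averting today: full value
print(relative_value(10))  # averting in a decade: worth less, per the argument above
```

The argument in the comment is precisely that this curve should fall off with time: the later the averting, the greater the chance a tipping point has already been crossed.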
I’ve updated towards earning to give having more of the characteristics of Task Y than I originally thought, based partly on the discussion in the comments. There are some good volunteering opportunities (for those in London, for example, doing charity analysis for SoGive) but I haven’t found anything as scalable yet.
One idea I want to explore more is that of effective activism. The difficulty of assessing outcomes is obvious, but XR, for all its flaws, has shown the potential to get huge numbers of people involved.
I agree that lots of structure is needed, and I’m very uncertain on the best structure. I do really like John Behar’s post above about the “personal best” approach though.
I think there’s reason to be cautious with the “highest marginal information comes from studying neglected interventions” line of reasoning, because of the danger of studies not replicating. If we only ever test new ideas, and then suggest funding the new ideas which appear from their first study to have the highest marginal impact, it’s very easy to end up with several false positives being funded even if they don’t work particularly well.
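A toy simulation (my framing, not from the post) makes the worry concrete: if no intervention actually works, but each gets one noisy study, funding whichever looks best in its first study still reliably selects something that looks impressive.

```python
import random

# Illustrative assumption: 50 candidate interventions, all with a true
# effect of zero, each evaluated by a single noisy study. "Fund the best
# first-study result" then funds pure noise, i.e. a false positive.
random.seed(0)

n_interventions = 50
true_effect = 0.0   # none of the interventions actually work
study_noise = 1.0   # standard deviation of a single study's error

first_study_results = [random.gauss(true_effect, study_noise)
                       for _ in range(n_interventions)]
best_looking = max(first_study_results)

# The winner's estimated effect is well above the true effect of zero,
# purely because we selected the luckiest draw.
print(f"Best first-study estimate: {best_looking:.2f} (true effect: {true_effect})")
```

Replication (a second study of the apparent winner) is what exposes this: the second draw regresses back towards the true effect.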
In fact, in some sense the opposite argument could be made: it is possible that the highest marginal information gain will come from research into a topic which is already receiving lots of funding. Mass deworming is the first example that springs to mind, mostly because there’s such a lack of clarity at the moment, but the marginal impact of finding new evidence about an intervention that already has lots of money behind it could still be very large.
I guess the rather sad thing is that the biggest impact comes from bad news: if an intervention is currently receiving lots of funding because the research picture looks positive, and a large study fails to replicate, a promising intervention now looks less so. If funding moves towards more promising causes as a result, this is a big positive impact, but it feels like a loss. It certainly feels less like good news than a promising initial study on a new cause area, but I’m not sure it actually results in a smaller impact.