Do you think there are any actions that would obviously decrease existential risk? (I took this question from here.) If not, does this significantly reduce the expected value of work to reduce existential risk, or is it just something that people have to be careful about (similar to limited feedback loops, information hazards, the unilateralist's curse, etc.)?
RandomEA
In the new 80,000 Hours interview of Toby Ord, Arden Koehler asks:
Arden Koehler: So I’m curious about this second stage: the long reflection. It felt, in the book, like this was basically sitting around and doing moral philosophy. Maybe lots of science and other things and calmly figuring out, how can we most flourish in the future? I’m wondering whether it’s more likely to just look like politics? So you might think if we come to have this big general conversation about how the world should be, our most big general public conversation right now is a political conversation that has a lot of problems. People become very tribal and it’s just not an ideal discourse, let’s say. How likely is it do you think that the long reflection will end up looking more like that? And is that okay? What do you think about that?
Ord then gives a lengthy answer; the following portion is the most directly responsive:
Toby Ord: . . . I think that the political discourse these days is very poor and definitely doesn’t live up to the kinds of standards that I loftily suggest it would need to live up to, trying to actually track the truth and to reach a consensus that stands the test of time that’s not just a political battle between people based on the current levels of power today, at the point where they’ll stop fighting, but rather the kind of thing that you expect people in a thousand years to agree with. I think there’s a very high standard and I think that we’d have [to] try very hard to have a good public conversation about it.
With respect to the necessity of a constitutional amendment, I agree with you on presidential elections but respectfully disagree as to congressional elections.
For presidential elections, the proposal with the most traction is the National Popular Vote Interstate Compact, which requires compacting states to give their electoral votes to the presidential ticket with a plurality of votes nationwide but only takes effect after states collectively possessing a majority of all electoral votes join the compact. Proponents argue that it is constitutional (with many believing it can be done without congressional consent), while opponents say that it is unconstitutional and in any case would require congressional consent. See pages 21-30 of this Congressional Research Service report for a summary of the legal issues. Regardless of which side has the better argument, it’s unlikely that an interstate compact would be used to adopt instant runoff voting or approval voting for presidential elections because i) absent a law from Congress, it would be up to non-compacting states whether to switch from plurality voting in their own state (which could mean voters in some states would be limited to choosing one ticket) and ii) it is questionable whether Congress has the power to require non-compacting states to switch (though see pages 16-17 of this article arguing that it does).
As for congressional elections, it's worth noting that the U.S. Constitution does not require plurality voting and does not even require single member districts. Indeed, ranked choice voting was used in Maine for congressional elections in 2018, and a federal judge rejected the argument that it is unconstitutional for being contrary to historical practice. And while single member districts have been used uniformly for nearly two centuries, they were not the only method in use at the founding, and courts tend to give special weight to founding era practice (see, e.g., Evenwel v. Abbott for an example related to elections), which makes me think that FairVote's single transferable vote proposal is on solid constitutional footing.
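For readers unfamiliar with how the ranked choice (instant runoff) voting used in Maine tallies ballots, here is a minimal Python sketch. The function name and the simplified elimination rule are illustrative only; real election rules handle ties, exhausted ballots, and write-ins in more detail.

```python
from collections import Counter

def instant_runoff(ballots):
    """Return the instant-runoff winner, given ballots as ranked lists.

    Simplified sketch: repeatedly count each ballot's highest-ranked
    remaining candidate; if someone has a majority, they win; otherwise
    eliminate the candidate with the fewest votes and recount.
    """
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked remaining candidate,
        # skipping ballots whose candidates have all been eliminated.
        tallies = Counter(
            next(c for c in ballot if c in remaining)
            for ballot in ballots
            if any(c in remaining for c in ballot)
        )
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total:  # strict majority of non-exhausted ballots
            return leader
        # No majority yet: eliminate the lowest-tallying candidate.
        remaining.discard(min(tallies, key=tallies.get))
```

For example, with ballots `[["A","B"], ["A","B"], ["B","A"], ["C","B"], ["C","B"]]`, no candidate has a first-round majority, "B" is eliminated, that ballot transfers to "A", and "A" wins with three of five votes.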
The 80,000 Hours career review on UK commercial law finds that “while almost 10% of the Members of Parliament are lawyers, only around 0.6% have any background in high-end commercial law.” I have been unable to find any similar analysis for the US. Do you know of any?
This makes me feel more strongly that there should be a separate career advice organization focused on near term causes. (See here for my original comment proposing this idea.)
A near term career advice organization could do the following:
- Write in-depth problem profiles on causes that could be considered to be among the most pressing from a near term perspective but that are not considered to be among the most pressing from a long term perspective (e.g. U.S. criminal justice reform, developing country mental health, policy approaches to global poverty, food innovation approaches to animal suffering, biomedical research focused on aging)
- Write in-depth career reviews of careers that could be considered to be among the highest impact from a near term perspective but that are not considered to be among the highest impact from a long term perspective (e.g. careers that correspond with the problems listed in the previous bullet point, specific options in the global poverty space, specific options in the animal suffering space)
- Produce a podcast that focuses on interviewing people working on issues that could be considered to be among the most pressing from a near term perspective but that are not considered to be among the most pressing from a long term perspective
- Become deeply familiar with the global poverty space, the animal suffering space, and other cause areas that are much more likely to be prioritized by near term people and form close connections to organizations working in such cause areas
- Provide job postings, career coaching, and referrals based on the information gained through the previous bullet point
I think the proposed organization would actually complement 80,000 Hours by expanding the number of cause areas for which there’s in-depth career advice and coaching; the two organizations could even establish a partnership where they refer people to each other as appropriate.
(As noted in my original comment, I think it’s better to have a separate organization do this since a long-term focused organization understandably wants to focus its efforts on causes that are effective from its perspective.)
This approach could have various benefits including:
- directly increasing impact by providing better advice to individual EAs who are unable to contribute to causes that are considered to be among the most pressing from a long term perspective
- benefiting the long-term space by keeping individuals who have the potential to contribute to the long term space involved with EA while they gain more skills and experience
- benefiting the long-term space by increasing the number of people who are able to benefit from EA career advice and thus the number of people who will refer others to 80,000 Hours (directly or through this proposed organization)
- benefiting the long-term space through the various benefits of worldview diversification (learning from feedback loops, community image, option value)
- benefiting individual EAs by helping them find a more fulfilling career (their utility counts too!)
Relevant literature:
- Is It Better to Blog or Formally Publish? by Brian Tomasik
- Why I Usually Don't Formally Publish My Writings by Brian Tomasik
Would it be possible to introduce a coauthoring feature? Doing so would allow both authors to be notified of new comments. The karma could be split if there are concerns that people would free ride.
[Criminal Justice Reform Donation Recommendations]
I emailed Chloe Cockburn (the Criminal Justice Reform Program Officer for the Open Philanthropy Project) asking what she would recommend to small donors. She told me she recommends Real Justice PAC. Since contributions of $200 or more to PACs are disclosed to the FEC, I asked her what she would recommend to a donor who wants to stay anonymous (and whether her recommendation would be different for someone who could donate significantly more to a 501(c)(3) than a 501(c)(4) for tax reasons). She told me that she would recommend 501(c)(4)s for all donors because it’s much harder for 501(c)(4)s to raise money and she specifically recommended the following 501(c)(4)s: Color of Change, Texas Organizing Project, New Virginia Majority, Faith in Action, and People’s Action.
I asked for and received her permission to post the above.
(I edited this to add a subject in brackets at the top.)
Do you know if this platform allows participants to go back? (I assumed it did, which is why I thought a separate study would be necessary.)
Do you think that asking the same respondents about 50 years, 100 years, and 500 years caused them to scale their answers so that they would be reasonable in relation to each other? Put another way, do you think you would have gotten significantly different answers if you had asked 395 people about 50 years, 395 people about 100 years, and 395 people about 500 years (cf. scope insensitivity)?
If you add a tag feature, can you make it so that authors can add tags to posts imported from EA Forum 1.0? I think it’d be great if someone interested in animal suffering could easily see all the EA Forum posts related to animal suffering.
And would you be willing to add a feature that allows you to tag individuals? (For this to work, you’d have to provide notifications in a more prominent way than the current ‘Messages’ system.)
Thank you so much for doing this! Is the total number of reactions just the number of likes and comments or does it also include shares? And if you happen to have more than the top 50 (as you hinted at here), would you be willing to post just the links in a Google doc?
If you take an action that does not look cause impartial (say, EA Funds mostly grants money to far future causes), then just acknowledge this openly, say that you have noted it, and explain why it happened.
Do you mean EA Grants? The allocation of EA Funds across cause areas is outside of CEA’s control since there’s a separate fund for each cause area.
Do you know if it’s just a fund for other large donors? It seems unusual to require small donors to send an email in order to donate.
If the fund is open to small donors, I hope CEA will consider mentioning it on the EA Funds website and the GWWC website.
Could you put together a handbook and/or video that could be sent to all trainees or is it critical that there be interaction between the trainer and trainee?
Would it be a good idea to create an EA Fund for U.S. criminal justice? It could potentially be run by the Open Phil program officer for U.S. criminal justice. Since Open Phil seems unlikely to fund everything the program officer thinks should be funded in this cause area, extra funding is more likely to be spent effectively.
This could help attract more people into effective altruism. However, that could be bad if you think those people are less likely to fully embrace the ideas of effective altruism and thus would dilute the community.
Do you think the general knowledge of EA that a typical EA has is sufficient to run a SHIC workshop? It seems to me that having local groups and university groups give EA lectures at high schools on career day is potentially both high impact and a way for those groups to do direct work.
Two significant limitations are high rates of respondent attrition and the likely influence of social desirability bias and/or demand effects, since it was probably clear after the workshop which responses were desired.
It seems to me one indication of social desirability bias and/or selective attrition is that there is a nearly half point shift in the average response to “I currently eat less meat than I used to for ethical reasons.” On the other hand, it’s possible students interpreted it as “I currently plan on eating less meat than I used to for ethical reasons.”
What are your thoughts on these questions from page 20 of the Global Priorities Institute research agenda?
How important do you think those questions are for the value of existential risk reduction vs. (other) trajectory change work? (The idea for this question comes from the informal piece listed after each of the above two paragraphs in the research agenda.)
Edited to add: What is your credence in there being a correct moral theory? Conditional on there being no correct moral theory, how likely do you think it is that current humans, after reflection, would approve of the values of our descendants far in the future?