Feel free to message me on here.
Hardcore utilitarians can endorse a norm that says “don’t commit fraud” because they think such a norm will have better consequences than an alternative norm that says “generally don’t commit fraud unless it seems like it could achieve more good than harm”.
The former norm is more likely to prevent instances of fraud, which is good not only because fraud can lead to bad PR, but also because a society with widespread fraud is unlikely to be a pleasant one.
So I do think hardcore utilitarians can be justified in condemning fraud in the strongest possible terms, although I accept one could debate this point.
I’m quite baffled by the argument that, because giving to charity or changing career can do more good than dietary change, it is therefore permissible or even advisable to avoid dietary change. Relative value is beside the point. In my opinion the absolute consequentialist value of being ve*an is still considerable, and it is this absolute value that ultimately matters.
Usually we think of saving one human life, or sparing one life from severe suffering, as an incredibly valuable thing to do, and it is. Why shouldn’t this be the case for farm animals? Going ve*an will impact far more than just one animal’s life anyway: Brian Tomasik estimates that “avoiding eating one chicken or fish roughly translates to one less chicken or fish raised and killed”. It’s also worth noting that over 99% of farm animals in the USA live on factory farms.
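To make the scale concrete, here is a back-of-the-envelope sketch; the per-person consumption figures are illustrative assumptions rather than sourced data, and the elasticity follows Tomasik’s rough one-for-one estimate:

```python
# Back-of-the-envelope: animals spared per year by one person going ve*an.
# Consumption figures below are illustrative assumptions, not sourced data.
animals_eaten_per_year = {
    "chickens": 25,
    "fish": 10,
}

# Tomasik's rough estimate: one animal not eaten translates to roughly one
# less animal raised and killed. Lower this to be more conservative.
cumulative_elasticity = 1.0

animals_spared = {
    species: count * cumulative_elasticity
    for species, count in animals_eaten_per_year.items()
}
print(animals_spared)  # {'chickens': 25.0, 'fish': 10.0}
```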
There are strong consequentialist reasons for going ve*an beyond the direct effects on the animals we eat, which are well covered here. One of the most important, in my opinion, is that you can influence others to change their diet and more generally spread concern for animals and expand our moral circle. We need a society that stops seeing animals as objects in order to reduce the chance of s-risks, in which vast amounts of suffering are locked in. How can we care about digital sentience when we don’t even care about cows?
Strong upvoted. I think this is correct, important and well-argued, and I welcome the call to OP to clarify their views.
This post is directed at OP, but this conclusion should be noted by the EA community as a whole which still prioritises global poverty over all else.
The only caveat I would raise is that we need to retain some focus on global poverty in EA for various instrumental reasons: it can attract more people to the movement, allow us to show concrete wins, etc.
The only thing of interest here is what sort of compromise ACE wanted. What CARE said in response is not of immediate interest, and there’s certainly no need to actually share the messages themselves.
Perhaps you can understand why one might come away from this conversation thinking that ACE tried to deplatform the speaker? To me at least it feels hard to interpret “find a compromise” any other way.
This might not be a valid concern, but I wonder whether, as the number of Forum users grows, there will be so many posts that most of them can only stay on the front page for a very short time. Most posts would then slip under the radar and get very little attention (at least compared to now). This may put people off engaging, although I guess it would then mean you’d settle at some sort of equilibrium.
Lots of very short posts could exacerbate this concern. Maybe the forum has to adapt as it grows: various sub-groups, as Reddit has, could help more posts get attention from those who are interested in them.
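The rough turnover arithmetic behind this worry, as a minimal sketch; the slot count and posting rates are made-up numbers:

```python
# How long an average post stays on the front page, assuming a fixed
# number of slots and a steady posting rate (all numbers are made up).
FRONT_PAGE_SLOTS = 30

for posts_per_day in (10, 50, 200):
    hours_visible = FRONT_PAGE_SLOTS / posts_per_day * 24
    print(f"{posts_per_day:>3} posts/day -> ~{hours_visible:.1f} hours on the front page")
```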
Great stuff. Surprised not to see any comments here.
One thing I’d be interested in someone looking into is why the UN appears to have (unexpectedly) woken up to these concerns to the extent that it has. Could this be partly due to the EA community? Understanding the relevant factors here might help us increase the attention other important institutions give to longtermism/existential risk.
Someone who buys into the asymmetry should still want to improve the lives of future people who will necessarily exist.
In other words, the asymmetry doesn’t count against longtermist approaches whose goal is to improve average future well-being, conditional on humanity not going prematurely extinct.
Such approaches might include mitigating climate change, improving institutional design, and ensuring AI is aligned. For example, an asymmetrist should find it very bad if AI ends up enslaving us for the rest of time…
I think there is a key difference between longtermists and thoughtful shorttermists which is surprisingly under-discussed.
Longtermists don’t just want to reduce x-risk; they want to permanently reduce it to a low level, i.e. achieve existential security. Without existential security, the longtermist argument simply doesn’t go through. A thoughtful shorttermist who is concerned about x-risk probably won’t care about existential security; they will probably just want to reduce x-risk to the lowest level possible within their lifetime.
Achieving existential security may require novel approaches. Some have said AI can help us achieve it, others say we need to promote international cooperation, and others say we may need to maximise economic growth or technological progress to speed through the time of perils. These approaches may seem lacking to a thoughtful shorttermist who may prefer reducing specific risks.
I’m very excited to read this sequence! I have a few thoughts. Not sure how valid or insightful they are but thought I’d put them out there:
On going beyond EVM / risk neutrality
The motivation for investigating alternatives to EVM seems to be that EVM has some counterintuitive implications. I’m interested in the meta question of how much we should be swayed by counterintuitive conclusions when EVM seems so well-motivated (e.g. by the VNM theorem) and when we know we are prone to biases and cognitive difficulties with large numbers.
Would alternatives to EVM also have counterintuitive conclusions? How counterintuitive?
The motivation for incorporating risk aversion also seems driven by intuition, but it’s worth remembering the problems with rejecting risk neutrality: some forms of risk aversion, for example, sometimes mean choosing an action that is stochastically dominated. Again, what are the problems with the alternatives, and how serious are they?
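As a toy illustration of what’s at stake, here is a minimal sketch; the payoffs, probabilities, and square-root utility are assumptions for illustration, and note that this simple expected-utility form of risk aversion is not the kind that leads to stochastically dominated choices:

```python
import math

# EVM vs one standard way to model risk aversion (concave utility).
# Lotteries are lists of (probability, payoff) pairs; numbers are illustrative.
certain = [(1.0, 100.0)]                  # 100 units of value for sure
gamble = [(1e-6, 1e9), (1 - 1e-6, 0.0)]   # tiny chance of a huge payoff

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def risk_averse_value(lottery, u=math.sqrt):
    # Expected utility with a concave u penalises long-shot gambles.
    return sum(p * u(x) for p, x in lottery)

print(expected_value(certain), expected_value(gamble))        # 100.0 vs 1000.0 -> EVM takes the gamble
print(risk_averse_value(certain), risk_averse_value(gamble))  # 10.0 vs ~0.032 -> risk aversion refuses it
```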
On choosing other cause areas over reducing x-risk
As it stands, I struggle to justify GHD work at all on cluelessness grounds. GiveWell-type analyses ignore a lot of foreseeable indirect effects of the interventions, e.g. effects on non-human animals. It isn’t clear to me that GHD work is net positive. I’d be interested in further work on this important point given how much money the community directs to GHD interventions.
Not all x-risk is the same. Are there specific classes of x-risk that are fairly robust to the issues people have raised about ‘x-risk as a whole’? For example, might s-risks (those dealing with outcomes far worse than extinction) be fairly robust? Are certain interventions, such as expanding humanity’s moral circle or boosting economic growth, robustly positive and better than alternatives such as GHD even if the Time of Perils hypothesis isn’t true? I’m genuinely not sure, but I know I don’t feel comfortable lumping all x-risks and x-risk interventions together in one bucket.
I believe the majority of “neartermist” EAs don’t have a high discount rate. They usually prioritise near-term effects because they don’t think we can tractably influence the far future (i.e. cannot improve the far future in expectation). You might find the 80,000 Hours podcast episode with Alexander Berger interesting.
EDIT: neartermists may also be wary of longtermist fanatical thinking, or may be driven by a particular population axiology, e.g. a person-affecting view. Within the EA movement, though, high discount rates are virtually unheard of.
I actually don’t relate to much of what you’re saying here.
For ex. If you think studying jellyfish is the most effective way to spend your life and career, draft an argument or forum post explaining the potential boonful future consequences of your research and bam, you are now an EA. In the past it would have been received as, why study jellyfish when you could use your talents to accomplish X or Y and something greater and follow a proven career path that is less risky and more profitable (intellectually and fiscally) than jellyfish study.
I know jellyfish is a fictional example. Can you give a real example of this happening? I’m not sure what you mean by “bam, you are now an EA”. What is the metric for this?
I wrote a post about two years ago arguing that the promotion of philosophy education in schools could be a credible longtermist intervention. I think reception was fairly lukewarm and it is clear that my suggestion has not been adopted as a longtermist priority by the community. Just because there were one or two positive comments and OKish karma doesn’t mean anything—no one has acted on it. It seems to me that it’s a similar story for most new cause suggestions.
Could it be more important to improve human values than to make sure AI is aligned?
Consider the following (which is almost definitely oversimplified):
                          ALIGNED AI       MISALIGNED AI
HUMANITY GOOD VALUES      UTOPIA           EXTINCTION
HUMANITY NEUTRAL VALUES   NEUTRAL WORLD    EXTINCTION
HUMANITY BAD VALUES       DYSTOPIA         EXTINCTION
For clarity, let’s assume dystopia is worse than extinction. This could be a scenario where factory farming expands to an incredibly large scale with the aid of AI, or a bad AI-powered regime takes over the world. Let’s also assume a neutral world is equivalent in value to extinction.
The table shows that aligning AI can be good, bad, or neutral. The value of alignment depends entirely on humanity’s values. Improving humanity’s values, however, is always good.
The only clear case where aligning AI beats improving humanity’s values is if there is no scope to improve our values further. An ambiguous case is when humanity has positive values: both improving values and aligning AI are then good options, and it isn’t immediately clear to me which wins.
The key takeaway is that improving values is robustly good, whereas aligning AI isn’t: alignment is bad if we have negative values. I would guess that we currently have pretty bad values, given how we treat non-human animals, so alignment is arguably undesirable. In this simple model, improving values becomes the overwhelmingly important mission. Or perhaps ensuring that powerful AI doesn’t end up in the hands of bad actors becomes overwhelmingly important (again, rather than alignment).
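To make the model explicit, here is a minimal sketch in code; the outcome values and probabilities are illustrative assumptions:

```python
# Toy expected-value comparison using the payoff matrix above.
# Outcome values are illustrative: extinction and a neutral world are set
# to 0, dystopia below 0 (worse than extinction), as assumed in the post.
OUTCOME_VALUE = {"utopia": 1.0, "neutral world": 0.0,
                 "extinction": 0.0, "dystopia": -1.0}

def outcome(values, aligned):
    # The post's simplifying assumption: misalignment means extinction.
    if not aligned:
        return "extinction"
    return {"good": "utopia", "neutral": "neutral world", "bad": "dystopia"}[values]

def expected_value(p_values, p_aligned):
    """p_values: probabilities over humanity's values; p_aligned: P(alignment succeeds)."""
    return sum(
        p * (p_aligned * OUTCOME_VALUE[outcome(v, True)]
             + (1 - p_aligned) * OUTCOME_VALUE[outcome(v, False)])
        for v, p in p_values.items()
    )

# If bad values are more likely than good ones, raising P(aligned) lowers EV:
p_values = {"good": 0.2, "neutral": 0.3, "bad": 0.5}
print(expected_value(p_values, p_aligned=0.5))  # -0.15
print(expected_value(p_values, p_aligned=0.9))  # ~-0.27
```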
This analysis doesn’t consider the moral value of AI itself. It also assumes that misaligned AI necessarily leads to extinction, which may not be accurate (perhaps misalignment can also lead to dystopian outcomes?).
I doubt this is a novel argument, but what do y’all think?
I agree that alignment research would suffer during a pause, but I’ve been wondering recently how much of an issue that is. The key point is that capabilities research would also be paused, so it’s not as if AI capabilities would be racing ahead of our knowledge of how to control ever more powerful systems. You’d simply be delaying both capabilities and alignment progress.
You might then ask—what’s the point of a pause if alignment research stops? Isn’t the whole point of a pause to figure out alignment?
I’m not sure that’s the whole point of a pause. A pause can also give us time to figure out optimal governance structures, whether standards, regulations, or something else. These structures can be very important in reducing x-risk. Even if the U.S. is the only country to pause, that still buys us more time, because the U.S. is currently in the lead.
I realise you make other points against a pause (which I think might be valid), but I would welcome thoughts on the ‘having more time for governance’ point specifically.
I have to say I’m pretty glad you won the lottery as I like the way you’re thinking! I have a few thoughts which I put below. I’m posting here so others can respond, but I will also fill out your survey to provide my details as I would be happy to help further if you are interested in having my assistance!
TLDR: I think LTFF and PPF are the best options, but it’s very hard to say which is the better of the two.
Longview Philanthropy: it’s hard to judge this option without knowing more about their general-purpose fund; I didn’t see anything about it on their website at first glance. With my current knowledge, I would say this option isn’t as good as giving to LTFF. Longview is trying to attract existing philanthropists who may not identify as effective altruists, which will to some extent constrain what it can grant to, as granting to something too “weird” might put off philanthropists. LTFF isn’t constrained in this way, so in theory giving to LTFF should be better, as LTFF can fund really great opportunities that Longview would be afraid to. Also, LTFF appears to have more vetting resources than Longview and a very clear funding gap.
Effective Altruism Infrastructure Fund: it seems to me that if your goal is to maximise your positive impact on the long-term future, giving to LTFF would be better. This is simply because EA is wider in scope than longtermism, so the Infrastructure Fund will naturally fund some things targeted at ‘global health and wellbeing’ opportunities which don’t have a long-term focus. If you look at LTFF’s Fund Scope you will see that LTFF funds opportunities to directly reduce existential risks, but also opportunities to build infrastructure for people working on longtermist projects and to promote long-term thinking, so LTFF also has a “growth” mindset if that’s what you’re interested in.
Patient Philanthropy Fund: personally I’m super excited about this, but it’s very difficult to say which of PPF or LTFF is better. Founders Pledge’s report is very positive about investing to give, but even they say in their report that “giving to investment-like giving opportunities could be a good alternative to investing to give”. I think whether investment-like giving opportunities beat investing to give is very much an open, and difficult, question. You do say that “even if the general idea of investing to give later isn’t the best use of these funds, donating to help get the PPF off the ground could still be”. I agree with this and like your idea of “supporting it with at least a portion of the donor-lottery funds”. Exactly how much to give is hard to say.
Invest the money and wait a few years: do you have good reason to believe that you/the EA community will be in a much better position in a few years? Why? If it’s just generally “we learn more over time” then why would ‘in a few years’ be the golden period? If ‘learning over time’ is your motivation, PPF would perhaps be a better option as the fund managers will very carefully think about when this golden period is, as well as probably invest better than CEA.
Pay someone to help me decide: doubtful this would be the best option. LTFF basically does this for free. If you find someone / a team who you think is better than the LTFF grant team then fine, but I’m sceptical you will. LTFF has been doing this for a while which has let them develop a track record, develop processes, learn from mistakes etc. so I would think LTFF is a safer and better option.
So overall my view is that LTFF and PPF are the best options, but it’s very hard to say which is the better of the two. I like the idea of giving a portion to each, but I don’t really think diversification like this has much philosophical backing, so if you have a hunch that one option is better than the other, and won’t be subject to significantly diminishing returns, you may want to give it all to that option.
One takeaway for EA might be to reduce the proportion of EA funds going towards climate change, although I suspect this is fairly low already.
Otherwise, is this a particularly good time for EA leaders to try to engage with Bezos and give him ideas for high-impact giving opportunities?
Thanks for writing this comment as I think you make some good points and I would like people who disagree with Hypatia to speak up rather than stay silent.
Having said that, I do have a few critical thoughts on your comment.
Your main issue seems to be the claim that these harms are linked, but you just respond by only saying how you feel reading the quote, which isn’t a particularly valuable approach.
I don’t think this was Hypatia’s main issue. Quoting Hypatia directly, they imply the following are the main issues:
The language used in the statement makes it hard to interpret and assess factually
It made bold claims with little evidence
It recommended readers spend time going through resources of questionable value
Someone called Encompass a hate group (which, as a side note, it definitely is not). The Anima Executive Director in question liked this comment.
You bring this up a few times in your comment. Personally I give the ED the benefit of the doubt here, because the comment in question also said “what does this have to do with helping animals”, which is a point the ED makes elsewhere in the thread, so it’s possible they were agreeing with that part of the comment rather than the ‘hate group’ part. I can’t be sure of course, but I highly doubt the ED genuinely agrees that Encompass is a hate group, given that their other comments in the thread seem fairly respectful of Encompass, including “it’s not really about animal advocacy, it’s about racial injustice and how animal advocates can help with that. That’s admirable of course, I just don’t think it’s relevant to this group”.
This was a red-flag to ACE (and probably should have been to many people), since the ED had both liked some pretty inflammatory / harmful statements, and was speaking on a topic they clearly had both very strong and controversial views on, regarding which they had previously picked fights on.
You seem to imply that others should have withdrawn from the conference too, or at least that they should have considered it? This all gets to the heart of the issue of free speech and cancel culture. Who decides what’s acceptable and what isn’t? When is expressing a different point of view just that, versus “picking a fight”? Is it bad to hold “strong and controversial views”?
People were certainly affected by the ED’s comments, but people are affected by all sorts of comments that we don’t, and probably shouldn’t, cancel people for. People will be affected by your comment, and people will be affected by my comment. When talking about contentious issues, people will be affected. It’s unavoidable unless we shut down debate altogether. You imply that the ED’s actions were beyond the pale, but we need to realise that this is an inherently subjective viewpoint and it’s clearly the case that not everyone agrees. So whilst ACE had the right to withdraw, I’m not sure we can imply that others should have too.
I’m late to this, but I’m surprised that this post doesn’t acknowledge the approach of inverse reinforcement learning (IRL) which Stuart Russell discussed on the 80,000 Hours podcast and which also featured in his book Human Compatible.
I’m no AI expert, but this approach seems to avoid the “as these models become superhuman, humans won’t be able to reliably supervise their outputs” problem: a superhuman AI using IRL doesn’t have to be supervised, since it simply observes us and, through doing so, comes to better understand our values.
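For intuition, here is a minimal toy sketch in the Bayesian IRL spirit; the candidate reward functions, the Boltzmann-rationality assumption, and all the numbers are illustrative assumptions, not Russell’s actual proposal:

```python
import numpy as np

# A toy "AI infers human values by observation" setup: the AI keeps a
# posterior over candidate reward functions and updates it from the choices
# it observes, assuming the human is Boltzmann-rational. Values are made up.
CANDIDATE_REWARDS = {
    "values_A": np.array([1.0, 0.0, 0.0]),
    "values_B": np.array([0.0, 1.0, 0.0]),
    "values_C": np.array([0.3, 0.3, 0.4]),
}

def choice_probs(reward, beta=5.0):
    # A Boltzmann-rational human picks option i with probability
    # proportional to exp(beta * reward[i]).
    logits = beta * reward
    p = np.exp(logits - logits.max())
    return p / p.sum()

def posterior(observed_choices):
    # Bayesian update over which reward function the human is acting on.
    names = list(CANDIDATE_REWARDS)
    post = np.ones(len(names))
    for c in observed_choices:
        post *= np.array([choice_probs(CANDIDATE_REWARDS[n])[c] for n in names])
    return dict(zip(names, post / post.sum()))

# The AI watches the human pick option 0 three times, then option 2 once,
# and grows confident the human's values look like "values_A".
print(posterior([0, 0, 0, 2]))
```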
I’m generally surprised at the lack of discussion of IRL in the community. When one of the world’s leading AI researchers says a particular alignment approach is our best hope, shouldn’t we listen?
Part of me wonders whether a better model than the one outlined in this post would be for Nonlinear to collaborate with well-established AI research organisations, which could advise on high-impact interventions, with Nonlinear then doing the grunt work to turn them into reality.
Even in this alternative model I agree that Nonlinear would probably benefit from someone with in-depth knowledge of AI safety as a full-time employee.
I notice that all but one of the November 2020 grants were given to individuals as opposed to organisations. What is the reason for this?
To clarify, I’m certainly not criticising. I guess it makes quite a bit of sense: individuals are less likely than organisations to be able to get funding elsewhere, so funding them may be better at the margin. However, I would still be interested to hear your reasoning.
I notice that the animal welfare fund gave exclusively to organisations rather than individuals in the most recent round. Why do you think there is this difference between LTFF and AWF?
Sure, and what is your point?
My current best guess is that WM quite reasonably understood FTX to be a crypto exchange with a legitimate business model earning money from fees, just as the rest of the world thought. The fact that FTX was making trades with depositor funds was very likely a closely kept secret that no one at FTX would disclose to an outsider. Why the hell would they? It’s pretty shady business!
Are you saying WM should have demanded proof that FTX’s money was being earned legitimately, even though he had no reason to believe it might not be? This seems to me like hindsight bias. To give an analogy: have you ever asked an employer of yours for proof that their activities aren’t fraudulent?