Researching Causality and Safe AI at Oxford.
Previously, founder (with help from Trike Apps) of the EA Forum.
Discussing research etc at https://twitter.com/ryancareyai.
“25 to 35 years before we think most of this risk will occur. That is a long time”
Is it really? Another reason for doing direct work sooner is that if the amount of AI safety work being performed is growing, then by working sooner, you will be able to do a larger fraction of the total.
E.g. if you think that AI risks might arrive in 10 or 50 years, and you think that a lot of AI safety research is going to happen after 20 years, then your relative contribution may be larger if AI arrives in 10 years, making it good to research soon.
Hey Michael, Congrats—it’s the first EA meetup on the site!
Re the facebook page, I can’t see it, possibly because it’s a private event. It might be better to show an EA group that people can join.
Thanks Chris Hallquist for following on from my investigations into startup earnings.
Note that this article was written over a year ago, and $3,400 is on the more optimistic side of GiveWell’s more recent estimates.
Agreed. If you used cosmopolitanism to mean a citizen of the cosmos, then you would be getting closer to that original definition. I note the following comment on definitions on Wikipedia:
‘Definitions of cosmopolitanism usually begin with the Greek etymology of “citizen of the world”. However, as Appiah points out, “world” in the original sense meant “cosmos” or “universe”, not earth or globe as current use assumes.’
Thanks, it should be fixed now.
My guess is that these links will be fine because the forum software takes any internal links in your posts and makes them ‘relative’. Anyway, Trike will continue to work ironing out site bugs over the weekend and we’ll aim to have things in order by Monday.
This proposal seems similar to Bhutan’s use of Gross National Happiness. I agree that it’s good for governments to use statistical analysis to figure out what is making their citizens (or ideally, the world’s citizens, of course!) better off. Governments already have data on how many citizens are using the disability support pension and have lots of lifestyle data from the census. Anyway, the proposal sounds interesting. Feel free to send me a Google doc with this kind of article to receive detailed feedback so that we can aim to post it over the coming weeks.
In situations like this, it can be good to put one suggestion in each comment so that other users can upvote each suggestion separately.
I think it’s fair to say that effective altruists don’t discuss “the fact that they’re predominantly utilitarian” very much, and that might seem kind of sinister on the surface, but I’m not quite sure how they’re supposed to discuss this topic. They could do a mea culpa and apologise for their lack of philosophical diversity, but this seems inappropriate. Alternatively, they could analyse utilitarianism in detail, which also seems wrong. What they have done is make a few public statements that, in principle, EA is more inclusive than that, which seems like a good first step. Is there much more that urgently needs to be done?
I partially agree here. The parts that I find easiest to agree with relate to the exclusion of non-utilitarians. I think it’s important that people who are not utilitarian can enter effective altruist circles and participate in discussions. I think it also might be good for effective altruists to pull back from their utilitarian frame of analysis and take a more global view of how their proposals (e.g. totalitarianism as a reducer of x-risk) might be perceived from a broader value system, if for no reason other than ensuring their research remains of wider societal interest. FHI would argue that they already do a lot of this; for example, in his thesis, Nick Beckstead argued that the importance of the far future goes through on a variety of moral theories, not just classical utilitarianism. But they have some room to improve.
I find it harder to sympathize with the view that effective altruists are adopting a certain moral perspective unreflectively. I think most have read some ethics and metaethics, and some have read more than the average philosophy major. So the ‘naive’ and simple view can be held by a sophisticated reader.
My last suggestion is that, given that the focus of effective altruism is how to do good, it’s only natural that its earliest adopters are consequentialists. If one thinks that different value systems converge on a lot of developing-world or existential risk-related problems, then it might be appropriate to focus on the ‘how’ questions rather than trying harder to pin down a more precise notion of good. As the movement grows, one hopes that the values of its constituency will broaden.
Peter Singer has written significantly about this. I think he gives it a chapter in The Life You Can Save. Here are some snapshots from comments he’s made online:
“We need to get over our reluctance to speak openly about the good we do. Silent giving will not change a culture that deems it sensible to spend all your money on yourself and your family, rather than to help those in greater need – even though helping others is likely to bring more fulfilment in the long run.”
“Research shows that when people know that others are giving, they are themselves more likely to give. So publicly pledging to give will encourage others to give. This holds true for billionaires and for those of us who aren’t anywhere near that level of wealth. We can all make a difference, and play our part in making the world a better place.”
Sounds bloody brilliant Niel!