Maybe you could look into translating the main articles into different languages?
turchin
When I learned about x-risks in 2007, I decided to do something about it, and the only thing I could do at the time was translate the main articles by Bostrom and EY into Russian. I spent half a year doing it, and it had two results: I improved my English, and I learned a lot about the topic, so I started to develop my own ideas, which eventually resulted in a book in Russian. So I think it is the right thing to do: go directly toward your goal.
I think that humans are the only chance for all other animal species to escape their normal path to extinction. Most species exist for around 4 million years, and all life on Earth will die off within the next 1 billion years or earlier because of the Sun's rising luminosity. But humans have started to resurrect extinct species and will save animal life if humanity is able to colonise the Galaxy. That is why doing good for humans and preventing x-risks is the best way we could help animals.
Most people will help a wild animal if they see it in trouble right now. In any case, I think we could create nanoimplants that would prevent the suffering of wild animals by blocking excessive pain in case of death or injury. These implants would not change the way they ordinarily live, so natural life would look almost the same. I would also vote for the resurrection of all sentient life, starting with humans but also including animals, from the most complex to the less complex. Probably a future AI could do it.
I think that if you "claim credit" for doing something good, you are not a pure altruist. Doing good is always a collaborative effort. I have seen examples of good programs failing when people started to argue about who did the most "good". There is no personal good, and if you expect a reward for your good deeds, it is just a deal on the "goods market". Of course, thinking about how much good I help to create raises my self-esteem and may help me navigate future decisions.
One thing I consider important for an altruistic career is that it should be both productive and altruistic in itself.
If I find a way for 1 million people to just pay me one dollar each (without them getting anything good in return), I will get 1 million dollars, which I may use for my altruistic goals, but the price is that one million people will not be able to spend that dollar on their own needs. Many of these people have a good understanding of what is good in their lives, and for some of them the marginal value of that 1 dollar may be high, like one more day of survival. My spending of this million will be better only in two cases: if I am cleverer at understanding human needs, or if I use the effect of concentration of capital.
It is clear that such estimations are subject to many biases, and as a result collecting money from people will do more harm than good.
Many financial careers are not productive; they are just clever instruments for finding hidden ways to tax ordinary people.
There are two possible ways climate change could go: a relatively mild one, which means a 4 C increase, and a runaway catastrophic one, in which the atmosphere becomes hotter by maybe 50 C and humanity goes extinct. The second is much less probable, less than 1 per cent in my estimation, but its consequences would be much graver, so its total negative utility may be higher than in the first scenario.
Are you going to address this issue? I could provide many links on the second scenario, but that might turn the discussion in the wrong direction, toward its validity, while my question is about the distribution of negative utility between more and less probable scenarios.
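The expected-utility point above can be sketched with toy numbers. All the figures below are illustrative assumptions chosen only to show the shape of the argument, not estimates from the comment:

```python
# Toy expected-disutility comparison of two climate scenarios.
# Probabilities and harm values are invented placeholders (arbitrary units).
mild = {"p": 0.50, "harm": 1.0}        # ~4 C warming: likely, moderate harm
runaway = {"p": 0.01, "harm": 1000.0}  # runaway scenario: <1% probable, far graver

ev_mild = mild["p"] * mild["harm"]          # 0.5 expected units of harm
ev_runaway = runaway["p"] * runaway["harm"]  # 10.0 expected units of harm

# With these assumptions, the low-probability scenario dominates in expectation.
print(ev_mild, ev_runaway)
```

The point is only that multiplying a small probability by a much larger harm can yield the larger expected disutility; whether it actually does depends entirely on the real numbers.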
Yes, I will add.
I even have some ad hoc ideas how to do it. 1) Converting oil into edible fats; the Germans did it after WW2. 2) Growing worms inside pieces of soil. 3) Chlorella. 4) Potatoes: if the whole territory of Russia were used to grow them, it would feed 30 billion people. 5) Bacteria converting cellulose into glucose. 6) https://en.wikipedia.org/wiki/Pinus_sibirica has edible nuts, and their total mass is very large, as this tree covers millions of square kilometers of taiga.
But if we stop emissions now, GW will probably continue for around 1000 years, as I read somewhere, and could even jump, because the cooling effect of soot will stop.
Global coordination problems also exist, but maybe they are not so severe. In the first case punishment comes for non-cooperation, and in the second for actions, and actions always seem more punishable.
Ok, I will add it as risks from geo-engineering.
Scientific study and preparation for GE are probably the longest part of GE; they could and should be done in advance, and they should not provoke a war. If a real necessity for GE appears, all the needed technologies will be ready.
Thanks for your interest—I would be happy to create a map based on public interest, but I need more input.
The map of interventions in global warming is in this post. )) Please clarify what you mean.
Another map, of x-risks prevention in general, is also about interventions.
All published maps are linked here: http://immortality-roadmap.com/sample-page/
Some future maps are in different draft stages:
AI as an instrument for life extension
Fermi paradox
Personal identity
Mind improvement
Aging theories
I will think about it
My Identity map is at http://lesswrong.com/r/discussion/lw/nuc/identity_map/
So, here are some ideas for further research, that is, fields a person could work in if he wants to make an impact on x-risks. In other words, career advice. For many of them I don't have the special background or the needed personal qualities.
Legal research on international law, including work with the UN and governments. Goal: prepare an international law and a panel for x-risks prevention. (Legal education is needed.)
Converting all information about x-risks (including my maps) into a large Wikipedia-style database. A master of communication is needed to attract many contributors and balance their actions.
Creating a computer model of all global risks which will be able to calculate their probabilities depending on different assumptions. Evolving this model into a world model with elements of AI, connected to monitoring and control systems.
Large research into the safety of bio-risks, which will attract professional biologists.
A promoter who could attract funding for different research without oversimplification of the risks and overhyping of the solutions. He may also be a political activist.
I think that in AI safety we already have too many people, so some work to integrate their results is needed.
A teacher: a professor who will be able to teach a course in x-risks research for students and prepare many new researchers. Maybe YouTube lectures.
An artist who will be able to attract attention to the topic without sensationalism and bad memes.
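The "computer model of global risks" suggestion in the list above could start very small. This toy sketch (all probabilities are invented placeholders, and the independence assumption is a strong simplification) combines per-risk probabilities into a total:

```python
# Minimal sketch of a global-risks model: combine per-risk probabilities
# into P(at least one catastrophe), assuming the risks are independent.
# The numbers are placeholders for the model's adjustable assumptions.

risks = {
    "AI": 0.010,
    "bio": 0.005,
    "nuclear": 0.002,
}

def total_risk(probs):
    """P(at least one catastrophe) = 1 - P(surviving every risk)."""
    survive = 1.0
    for p in probs.values():
        survive *= 1.0 - p
    return 1.0 - survive

print(total_risk(risks))
```

A real model would replace the independence assumption with dependencies between risks (e.g. a nuclear war raising bio-risk), which is exactly where it would stop being a one-liner and start being a research project.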
One way to have interesting conversations is to have them over dinner between the public speeches at a conference. The most interesting thing during conferences is the informal connection between people during breaks and in the evenings. A conference is just an excuse to bring the right people together and frame the topic. So such a conference may help to connect national security people and AI safety people.
But my feeling from previous conversations is that the current wisdom among AI people is that government people are unable to understand their complex problems and are also not players in the game of AI creation; only hackers and corporations are. I don't think that is the right approach.
Thanks, I will immediately update.
Done
If you find and prove the right strategy for FAI creation, how will you implement it? Will you send it to all possible AI creators, or try to build your own AI, or ask the government to pass it as law?
If you get credible evidence that AGI will be created by Google in the next 5 years, what will you do?
I have created a roadmap of x-risks prevention, and I think that it is complete and logically ordered. I will make a longer post about it if I am able to get enough karma. )) The pdf is here: http://immortality-roadmap.com/globriskeng.pdf