My guess is that it can help convert non-EAs into people with roughly EA-aligned objectives, which seems highly valuable! What I mean is that a simple econ degree is enough to produce people who think almost like EAs, so I expect an EA university to be able to do that even better.
Decomposing Biological Risks: Harm, Potential, and Strategies
Some more ideas related to what you mentioned:
Exploring/exploiting interventions on growth in developing countries. For instance, what if we took an entire country and spent about $100 or more per household (for a small country, that could be feasible)? We could do direct transfers as GiveDirectly does, but I'd expect some public goods funding to be worth trying as well.
Making AI safety prestigious by setting up an institute that would hire top researchers for safety-aligned research. I'm not 100% sure, but I feel like top AI people often go to Google in large part because it offers great working conditions. If an institute offered those working conditions and could hire top junior researchers at scale to work on prosaic AGI alignment, that could help make AI safety more prestigious. Such an institute could also run seminars, give awards in safety, or even host a conference in the long run.
Does anyone know where to buy potassium iodide tablets? I can't find any online seller that isn't out of stock.
Making Impactful Science More Reputable
There are two things that matter in science: reputation and funding. While there is more and more funding available for mission-driven science, we'd be excited to see projects that try to increase the reputation of impactful science. We think that raising the reputation of impactful work could, over time, substantially increase the amount of research done on most things society cares about. Some ways we could provide more reputation to impactful research:
Awarding prizes to past and present researchers who have done mostly impactful work.
Organizing “seminars of impact” where the emphasis is on researchers who have managed to make their research impactful.
Communicating and sharing the impactful research being done. That could be done in several ways (e.g. simply using social media, or making short films about specific mission-driven research projects).
Monitoring Nanotechnologies and APM
Nanotechnology, and a catastrophic scenario linked to it called “grey goo”, has received very little attention recently (more information here), even though the field keeps moving forward and some think it's one of the most plausible paths to human extinction.
We'd be excited for a person or an organization to closely monitor the evolution of the field and produce content on how dangerous it is. Knowing whether there are actionable steps that could be taken now would be very valuable for both funders and researchers in the longtermist community.
New GPT3 Impressive Capabilities—InstructGPT3 [1/2]
Is GPT3 a Good Rationalist? - InstructGPT3 [2/2]
Hi Lauren! Thanks for the post! Did you come across any literature on civil wars and life satisfaction? I expect the effect of civil wars on the latter to be significant, so I'd be curious to know if there are any estimates.
[TO ML RESEARCHERS AND MAYBE TECH EXECUTIVES]
When you look at society's problems, you can observe that many of our structural problems come from strong optimizers. Companies, to keep growing once they're big enough, start adopting questionable practices such as tax evasion, preventing new companies from entering markets, capturing regulators to keep their advantages, etc.
The policymakers who get elected are those who make false promises, who are ruthless with their adversaries, and who communicate without caring about truth.
Now, even these optimizers, which are hard to fight against, are very limited in their capabilities. They're limited by coordination costs, by their limited ability to forecast, and by their limited ability to process relevant information. AI risks breaking down these barriers and being able to optimize much more strongly. Thus, the feeling you may have when facing these companies and policymakers, i.e. that you can't stop them even when you see how they're cheating, will be multiplied tenfold when facing smarter AIs.
[TO POLICYMAKERS]
Trying to align very advanced AIs with what we want is a bit like trying to design a law or measure to constrain massive companies, such as Google or Amazon, or powerful countries, such as the US or China. You know that when you put a rule in place, they will have enough resources to circumvent it. And however hard you try, if you didn't design the AI properly in the first place, you won't be able to make it do what you want.
Future Design: How To Include Future Generations in Today’s Decision-Making?
I know it's not trivial to do, but if you took your AGI timelines into consideration for this type of forecast, you'd come up with very different estimates. For that reason, I'd be willing to bet on most estimates
Longtermists Should Work on AI—There is No “AI Neutral” Scenario
Thanks for the comment.
I think this would be true if there were other X-risks. I just think there is no other literal X-risk. There are huge catastrophic risks, but there's still a huge difference between killing 99% of people and killing 100%.
I'd recommend reading (or skimming through) this to get a better sense of how different the two are.
I think that, in general, the sense that it's cool to work on every risk comes precisely from the fact that very few people have thought about every risk; thus people in AI, for instance, IMO tend to overestimate risks in other areas.
If there were no preferences, at least 95%, and probably closer to 99%. I think this should be updated according to our timelines.
And just to clarify, that includes community building etc. as I mentioned.
Just tell me a story, with probabilities attached, of how nuclear war or bioweapons could cause human extinction, and you'll see that when you multiply the probabilities, the result goes down to a very low number.
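To make the shape of that argument concrete, here is a minimal sketch with purely made-up, illustrative conditional probabilities (not estimates I'm defending); the only point is how quickly the product of several steps shrinks:

```python
# Purely illustrative, hypothetical probabilities for one nuclear-extinction story.
p_large_war = 0.05              # a nuclear war large enough to trigger a severe nuclear winter
p_winter_given_war = 0.3        # the war actually causes a years-long global agricultural collapse
p_kill_99_given_winter = 0.1    # the collapse kills ~99% of the population
p_no_survivors_given_99 = 0.01  # not even one viable group of ~1,000 people remains

p_extinction = (p_large_war
                * p_winter_given_war
                * p_kill_99_given_winter
                * p_no_survivors_given_99)
print(f"P(extinction) under this story: {p_extinction:.6f}")  # 0.000015
```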
I repeat myself, but I think you still don't have a good sense of how difficult it is to kill every human if the minimum viable population (MVP) is around 1,000, as argued in the post linked above.
"knock-on effects"
I think that's true, but to first order, not dying from AGI is the most important thing, compared with developing it in, say, 100 years.
“Firstly, under the standard ITN (Importance Tractability Neglectedness) framework, you only focus on importance. If there are orders of magnitude differences in, let’s say, traceability (seems most important here), then longtermists maybe shouldn’t work on AI.”
I think this makes sense in the domain of non-existential areas. But in practice, when you're confident about existential outcomes and don't yet know how to solve them, you probably should still focus on them.
"which probably leads to an overly narrow interprets of what might pose X-Risk. I also think the dismissal of climate change and nuclear war seems to imply that human extinction=X-Risk. This isn’t true (definitionally),"
Not sure what you mean by "this isn't true (definitionally)". Do you mean irrecoverable collapse, or do you mean for animals?
“although you may make an argument nuclear war and climate change aren’t X-Risks, that argument is not made here.”
The posts I linked to were meant to have that purpose.
"I am not hear claiming you are wrong, but rather you need stronger evidence to support your conclusions."
An intuition for why it's hard to kill everyone down to the point where only 1,000 people survive:
- For humanity to die out, you need an agent: humans are very adaptive in general, and you might expect that at least the richest people on the planet have plans and will try to survive at all costs.
So for instance, even if a virus infects 100% of people (almost impossible if people are aware the virus exists) and literally kills 99% of people (again, almost impossible), you still have 70 million people alive. And no agent on Earth has ever killed 70 million people. So even if a malevolent state wanted to do that (very unlikely), it would have a hard time killing people until fewer than 1,000 are left.
The same goes for nuclear war: it may not be too hard to kill 90% of people with a nuclear winter, but it's very hard to kill the remaining 10%, then 1%, then 0.1%, etc.
I think that if the community were convinced it was by far the most important thing, we would try harder to find projects, and I'm confident there are a bunch of relevant things that can be done.
I think we're suffering from an argument-to-moderation fallacy that makes us underinvest massively in AI safety because:
1) AI safety is hard.
2) There are other causes that, when you don't think too deeply about them, seem equally important.
The portfolio argument is an abstraction that hides the fact that if something is far more important than everything else, you just shouldn't diversify. That's precisely why we give to AMF rather than to other things, without diversifying our portfolio.
“Ai safety is still weird. FTX was originally only vegan, and only then shifted to long term considerations.”
That's right, but then your theory of change in other areas needs to be oriented towards AI safety, and that might lead to very different strategies. For instance, you might not want to lose "weirdness points" on other cause areas, or might not want to bring in the same types of profiles.
Thanks David, that's great!
“The first reason not to pursue the one-country approach from a policy perspective is that non-existential catastrophes seem likely, and investments in disease detection and prevention are a good investment from a immediate policy perspective. Given that, it seems ideal to invest everywhere and have existential threat detection be a benefit that is provided as a consequence of more general safety from biological threats. There are also returns to scale for investments, and capitalizing on them may require a global approach.”
I feel like there are two competing views here that your comment highlights very well:
From a global perspective, the optimal policy is probably to put metagenomic sequencing at the key nodes of the travel network so that we become aware of any pathogen as soon as possible. I feel like that's roughly what you meant.
From a marginalist perspective, given that governance happens country by country, it's probably much easier to cover one country that cares about national security (e.g. the US) with metagenomic sequencing than to apply the first strategy.
I expect the limiting factor to be not our own resource allocation but the opportunities to push for the relevant policies at the right moment. If we're able to pursue the first strategy, i.e. if there's an opportunity for us to push for a global metagenomic plan that has some chance of working, that's great! But if we're not, we shouldn't disregard the second strategy (i.e. pushing for a single country to implement a strong metagenomic sequencing policy) as a way to greatly mitigate at least the X-risks from GCBRs.
“Second, a key question for whether the proposed “one country” approach is more effective than other approaches is whether we think early detection is more important than post-detection response, and what they dynamics of the spread are. As we saw with COVID-19, once a disease is spreading widely, stopping it is very, very difficult. The earlier the response starts, the more likely it is that a disease can be stopped before spreading nearly universally. The post-detection response, however, can vary significantly between countries, and those most able to detect the thread weren’t the same as those best able to suppress cases—and for this and related reasons, putting our eggs all in one basket, so to speak, seems like a very dangerous approach.”
Yes, I agree with this for GCBRs in general but not for existential ones! My point is just that, conditional on a very, very bad virus and on awareness of that virus, I expect some agents who learn about it quite early (hence the idea of putting metagenomic sequencing at every entry point of a country) to find ways to survive it, either thanks to governments or thanks to personal preparation (personal bunkers and that kind of thing).
I hope I answered your points correctly!
Thanks for the comment!