54 years old, male. Worked in a wide variety of fields—manufacturing, machine design, software. I do chemistry as a hobby and have pretty extensive knowledge of 19th-century history and technique in that area.
EA interests include carbon fixation or more generally fighting climate change, as well as remedying malnutrition in the poorer parts of the world.
bbartlog
We should also remember Vasily Arkhipov who was similarly responsible for averting a nuclear attack in 1962.
The corporate alignment problem does precede the AI alignment problem. In some sense we rather deliberately misaligned corporations by giving them a single goal, relying on the human agency and motivation embedded within the system to keep them from running amok. But as they became more sophisticated and competed with each other, this became unreliable, and we have instead tried to restrain and incentivize them with regulation, which has also not been entirely satisfactory.
Steinbeck was prescient (or just a keen observer):
“It happens that every man in a bank hates what the bank does, and yet the bank does it. The bank is something more than men, I tell you. It’s the monster. Men made it, but they can’t control it.”

Unfortunately, the gap between politically feasible solutions and ones that seem likely to actually be effective is pretty large in this area.
I would include all US patent information. Possibly an AI could filter this to include only ‘important patents’ since it’s a large archive but in any case it’s vital information.
So far as computers, digital content, and software are concerned… these may not remain usable. One critical part of this effort could be designing and building perdurable computer hardware, so that the archive could contain one or more computers built to last a hundred years. But I don’t know how feasible this is: swapping out the few things with a known limited lifetime (fans, electrolytic capacitors, thermal paste, etc.) is not too difficult, but if you need to re-engineer an SSD from the ground up to push its MTBF to two centuries… that’s hard. I guess if failures are purely stochastic you can just pump up the redundancy to a fantastic degree.
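The redundancy arithmetic can be made concrete with a quick sketch, assuming failures are purely stochastic (exponential). The 10-year MTBF and 99% target below are illustrative numbers I've picked, not real drive specs:

```python
import math

def survival_prob(mtbf_years: float, horizon_years: float) -> float:
    """P(a single unit still works after horizon_years), under an
    exponential failure model with the given MTBF."""
    return math.exp(-horizon_years / mtbf_years)

def copies_needed(mtbf_years: float, horizon_years: float,
                  target: float = 0.99) -> int:
    """Smallest N such that P(at least one of N independent copies
    survives the horizon) >= target."""
    p_fail = 1.0 - survival_prob(mtbf_years, horizon_years)
    # P(all N fail) = p_fail**N  =>  N >= log(1 - target) / log(p_fail)
    return math.ceil(math.log(1.0 - target) / math.log(p_fail))

# A 10-year-MTBF part over a 100-year horizon survives with
# probability ~4.5e-5; a 100-year-MTBF part survives ~37% of the time.
p_short = survival_prob(10, 100)
n_short = copies_needed(10, 100)    # on the order of 1e5 copies
n_long = copies_needed(100, 100)    # only 11 copies
```

The numbers illustrate the catch: with a 10-year-MTBF component you'd need on the order of a hundred thousand redundant copies to get 99% odds over a century, whereas pushing per-unit MTBF to 100 years drops that to about a dozen. So redundancy and re-engineering are complements, not alternatives.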
Broadly speaking I would favor print media for this reason. Worth keeping in mind is that advanced industries have their own complex ontogeny. If population is reduced to the extent you describe, knowledge of post-1950 technology could mostly be useless to them for many generations (except as a guide to using scavenged artifacts). Even building something like a functioning railroad requires an entire small civilization.
I think per forum norms this should be a personal blog post rather than front page material.
There are numerous criticisms I would make of your proposal but one simple one is that this system would favor A) challengers who had never held office and B) people who campaigned on vague, vibe-based platforms.
I think this is an excellent area to focus on—though I am maybe biased in that I favor quality of life interventions over quantity of life interventions (one might say that I find the Repugnant Conclusion especially repugnant).
My main curiosity as regards iodine supplementation specifically is whether it is currently neglected enough to be a good cause area. That it can be dramatically efficient when successful is pretty clear I think, but it’s also an area where many governments do make ongoing efforts (for example, India has a National Iodine Deficiency Disorders Control Programme). Are there private organizations that do good work in filling in the gaps or compensating for the failures in these government programs?
It would seem to me that a philanthropist who is really purely interested in maximizing the impact of altruistic spending would have to be operating in a fairly narrow range of confidence in their ability to shape the future in order for this kind of investing to make sense.
In other words: either I can affect things like AI risk, future culture, and long-term outcomes in a way that implies above-market ‘returns’ (in human welfare) to my donation over extended time frames, in which case I should spend what money I’m willing to give on those causes today, investing nothing for future acts of altruism.
Or I have little confidence in my judgment on these future matters, in which case I should help people living today and again likely invest nothing.
Only in some narrow middle ground where I think the ROI on these investments will allow for better effective altruism in the future (though I have no really good idea how to influence it otherwise) would it make sense to put aside money like this.
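The narrowness of that middle ground can be sketched as a toy break-even calculation. All rates and horizons here are hypothetical placeholders of my own, not claims about actual market returns or philanthropic opportunity decay:

```python
def future_beats_now(market_return: float, years: int,
                     effectiveness_growth: float) -> bool:
    """Compare donating $1 today against investing it for `years`
    and donating the proceeds. `effectiveness_growth` is the assumed
    annual rate at which welfare-per-dollar-donated changes (negative
    if the best giving opportunities are drying up over time)."""
    donate_now = 1.0
    donate_later = ((1 + market_return) ** years
                    * (1 + effectiveness_growth) ** years)
    return donate_later > donate_now

# With a 5% market return but opportunities shrinking 7%/year,
# giving now wins; if effectiveness only decays 3%/year, waiting wins.
future_beats_now(0.05, 20, -0.07)  # False: donate today
future_beats_now(0.05, 20, -0.03)  # True: invest and donate later
```

The point of the toy model is how sensitive the answer is: a couple of percentage points of difference between assumed investment returns and assumed decay in giving opportunities flips the conclusion entirely, which is why only a narrow band of confidence makes patient philanthropy the clear choice.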
There are of course other reasons that someone with a great deal of money wouldn’t want to try to spend it all at once. It’s understood that it’s actually difficult to give away a billion dollars efficiently, so donating it over time makes sense as a way to get feedback and avoid diminishing returns in specific areas. But this is a separate concern.
One thing I am cautiously optimistic about (at least as regards long term outcomes) is that I think ‘a few high-profile sub-existential-catastrophe events’ are fairly likely. In particular I think that we will soon have AIs capable of impersonating real humans, both online and on the phone via speech synthesis.
These will be superhuman, or maybe just at the level of an expert human, in terms of things like ‘writing a provocative tweet’ or ‘selling you insurance’ or ‘handling call center tasks’. Or, once the technology is out in the open, ‘scamming your grandma’, ‘convincing someone to pay a bitcoin ransom’, and so on. At that point such AIs seem likely to still be short of being able to generalize to the point of escaping confinement, or being trained to the point where emergent motives would cause them to try to do so. But they would likely be ubiquitous enough that they would attract broad public notice and, quite likely, cause considerable fear. We might not have enough attention directed towards AI safety yet, but I think public consciousness will increase dramatically before all the pieces that would make hard takeoff possible are in place.
I think the first negative example is not particularly good. The outer layer is not related to the inner layer. People have a general expectation that others will be private about any illegal activities. Operating a cocaine dealership is negative, but that’s really a completely separate concern from social issues of transparency and trust.
A possibly better negative example here might be ‘I have an STD and don’t inform sex partners about it’.