Quest: see the inside of an active bunker
Seems like a good idea if it were easy
(Sorry, when I said your story for impact was “plausible”, in my head I was comparing it to my own idea for why this would be good, and I meant that it was plausibly better than my story. I actually buy your pitch as written, seems like a solidly good thing; apologies)
What a cool project! I listen to the vast majority of my reading these days and am perpetually out of good things to read.
The linked audio is reasonably high quality, and more importantly, it doesn’t have some of the formatting artifacts that other TTS programs have. Well done.
Your story for why this is a potentially high impact project is plausible to me, especially given how much you’ve automated. I have independently been thinking about building something similar, but with a very different story for why it could be worth my time. That story doubles as a different reason your thing could be good :), so I thought I’d share it.
My story was that the top-performing people in a given cause area account for a large fraction of the valuable work, if you buy power law type arguments. By definition, their time is a lot more valuable than average. But it is also more valuable for them to be better informed, because the changes they make to their decisions by being better informed are leveraged by their high work output or its consequences.
If you buy this story, I think you wind up focusing on tailoring what gets audio-fied to what would be useful to the most productive people in EA. So like, what do top AI safety researchers wish they had time to listen to? I’d bet that this is actually a very different set of things than Forum/ LW posts.
When I started to do my thing, I suspected that a lot of the researchers who are doing the best work would benefit from being able to hear more academic papers, from arxiv for example. But IMO the key problem is that these don’t get read well because of formatting issues. I think this is a solvable problem, and have a few leads, but it was too annoying for me to do as a side project. DM me if you’re interested in chatting about that
Side point: this view of why this is high impact also speaks to letting the top people in question choose what they listen to, which looks more like an app that does TTS on demand than a podcast feed. This happens to avoid copyright issues, if the existence of other TTS apps is any indication.
You might be able to hack together an equivalent solution (on both copyright and customization) without needing to develop your own app. A simple website could let people log in and make them a private RSS feed (compatible with most podcast players I think, though I’m not confident in any of this). Then when they input a link on the website, it’s compiled and added to their RSS feed for use in the player. If you had an API for calling your TTS script (and had solved these formatting issues), I or someone else could probably hack something like this website together pretty fast.
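To make the private-feed idea concrete, here is a minimal sketch of just the feed-generation piece, assuming each user gets a secret token in their feed URL and each submitted link becomes an item pointing at the TTS-generated audio. All names and URLs here (`Episode`, `make_feed`, example.com) are hypothetical, not from any existing service:

```python
# Sketch: render a minimal podcast RSS feed for one user's submitted links.
from dataclasses import dataclass
from xml.sax.saxutils import escape

@dataclass
class Episode:
    title: str
    audio_url: str   # URL of the TTS-generated mp3
    source_url: str  # the post the user submitted

def make_feed(user_token: str, episodes: list) -> str:
    """Render an RSS 2.0 document most podcast players should accept."""
    items = "\n".join(
        f"""  <item>
    <title>{escape(e.title)}</title>
    <link>{escape(e.source_url)}</link>
    <enclosure url="{escape(e.audio_url)}" type="audio/mpeg"/>
  </item>"""
        for e in episodes
    )
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
  <title>Private TTS feed</title>
  <link>https://example.com/feed/{escape(user_token)}</link>
  <description>Articles this user asked to have read aloud</description>
{items}
</channel>
</rss>"""

# Hypothetical usage: one submitted post, already converted to audio.
feed = make_feed("secret-token-123", [
    Episode("Example post", "https://example.com/a.mp3",
            "https://example.com/post"),
])
```

Most players only need a well-formed RSS 2.0 document with `<enclosure>` tags, so something this simple is plausibly enough for a prototype; the login and TTS-compilation steps are the real work.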
And there are various things one could probably do to make it not illegal but still messed up and the wrong thing to do! Like making it mandatory to check a box saying you waive your copyright for audio on a thing before you post on the Forum. I think if, like some of the tech companies, you made this box really little and hard to find, most people would not change their posting behavior very much, and it would now be totally legal (by assumption).
But it would still be a bad thing to do.
This is a reason to fix the system! My point is that it reduces to “make all the authors happy with how you are doing things”, there is not some spooky extra thing having to do with illegality
TBC I do not endorse using people’s content in a way they aren’t happy with, but I would still have that same belief if it wasn’t illegal at all to do so.
I use Speechify; its voices are quite good, but it has the same formatting issues as all the rest (reading junk text), which I think is the real bottleneck here.
FWIW I think I endorse Kat’s reasoning here. I don’t think it matters if it is illegal if I’m correct in suspecting that the only people who could bring a copyright claim are the authors, and assuming the authors are happy with the system being used. This is analogous to the way it is illegal, by violating minimum wage laws, to do work for your own company without paying yourself, but the only person who has standing to sue you is AFAIK yourself.
Not a lawyer, not claiming to know the legal details of these cases, but I think this standing thing is real and an appropriate way to handle it.
Many longtermist questions related to dangers from emerging tech can be reduced to “what interventions would cause technology X to be deployed before/ N years earlier than/ instead of technology Y”.
In biosecurity, my focus area, an example of this would be something like “how can we cause DNA synthesis screening to be deployed before desktop synthesizers are widespread?”
It seems a bit cheap to say that AI safety boils down to causing an aligned AGI to be developed before an unaligned one, but it kind of basically does, and I suspect that as more of the open questions get worked out in AI strategy/ policy/ deployment there will end up being at least some examples of well defined subproblems like the above.
Bostrom calls this differential technology development. I personally prefer “deliberate technology development”, but call it DTD or whatever. My point is, it seems really useful to have general principles for how to approach problems like this, and I’ve been unable to find much work, either theoretical or empirical, trying to establish such principles. I don’t know exactly what these would look like; most realistically they would be a set of heuristics or strategies alongside a definition of when they are applicable.
For example, a shoddy principle I just made up but could vaguely imagine playing out is “when a field is new and has few players, (e.g. small number of startups, small number of labs) causing a player to pursue something else on the margin has a much larger influence on delaying the development of this technology than causing the same proportion of R&D capacity to leave the field at a later point”.
While I expect some theoretical econ type work to be useful here, I started thinking about the empirical side. It seems like you could in principle run experiments where, for some niche areas of commercial technology, you try interventions which are cost effective according to your model to direct the outcome toward a made up goal.
Some more hallucinated examples:
make the majority of guitar picks purple
make the automatic sinks in all public restrooms in South Dakota stay on for twice as long as the current ones
stop CAPTCHAs from ever asking anyone to identify a boat
stop some specific niche supplement from being sold in gelatin capsules anywhere in California
The pattern: specific change toward something which is either market neutral or somewhat bad according to the market, in an area few enough people care about/ the market is small and straightforward such that we should expect it is possible to occasionally succeed.
I’m not sure that there is anything which is a niche enough market to be cheap to intervene on while still being at all representative of the real thing. But maybe there is? And I kind of weirdly expect trying random stuff like this to actually yield some lessons, at least in implicit know-how for the person who does it.
Anyway, I’m interested in thoughts on the feasibility and utility of something like this, as well as any pointers to previous attempts to do this kind of thing (it sort of seems like a certain type of economist might be interested in experimenting in this way, but it’s probably way too weird).
I wonder how these compare with fitting a Beta distribution and using one of its statistics? I’m imagining treating each forecast (assuming they are probabilities) as an observation, and maximizing the Beta likelihood. The resulting Beta is your best guess distribution over the forecasted variable.
It would be nice to have an aggregation method which gave you info about the spread of the aggregated forecast, which would be straightforward here.
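As a concrete sketch of the idea: below I use a method-of-moments fit (a cheap stand-in for the full maximum-likelihood fit I described, and exactly equivalent in what it reports) to get both an aggregated point forecast and its spread, since the fitted Beta’s mean and variance fall straight out. The forecast numbers are made up for illustration:

```python
# Fit a Beta distribution to a set of probability forecasts by matching
# the sample mean and variance (method of moments), then read off the
# aggregate forecast and its spread from the fitted distribution.
from statistics import mean, variance

def fit_beta(forecasts):
    """Return (alpha, beta) whose Beta matches the sample mean/variance."""
    m, v = mean(forecasts), variance(forecasts)
    common = m * (1 - m) / v - 1  # only valid when v < m * (1 - m)
    return m * common, (1 - m) * common

forecasts = [0.55, 0.6, 0.65, 0.7, 0.8]  # made-up individual forecasts
a, b = fit_beta(forecasts)
agg = a / (a + b)  # aggregated point forecast (equals the sample mean)
spread = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5  # Beta std dev
```

By construction the fitted mean and variance match the sample’s exactly; a true MLE fit (e.g. `scipy.stats.beta.fit` with `floc=0, fscale=1`) would give slightly different shape parameters but the same qualitative picture.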
I’m vulnerable to occasionally losing hours of my most productive time “spinning my wheels”: working on sub-projects I later realize don’t need to exist.
Elon Musk gives the most lucid naming of this problem in the below clip. He has a 5 step process which nails a lot of best practices I’ve heard from others and more. It sounds kind of dull and obvious to write down, but somehow I think staring at the steps will actually help. It’s also phrased somewhat specifically around building physical stuff, but I think there is a generic version of each step. I’m going to try implementing it on my next engineering project.
The explanation is meandering (though with some great examples I recommend listening to!), so here is my best attempt at a quick paraphrase:
The Elon Process:
“Make your requirements less dumb. Your requirements are definitely dumb.” Beware especially requirements from smart people because you will question them less.
Delete a part, process step or feature. If you aren’t adding 10% of deleted things back in, you aren’t deleting enough.
Optimize and simplify the remaining components.
Accelerate your cycle time. You can definitely go faster.
Automate. Do this last, only after the steps above.
One more unsolicited outreach idea while I’m at it: high school career / guidance counselors in the US.
I’m not sure how idiosyncratic this was of my school, but we had this person whose job it was to give advice to older highschool kids about what to do for college and career. Mine’s advice was really bad and I think a number of my friends would have glommed onto 80k type stuff if it was handed to them at this time (when people are telling you to figure out your life all of a sudden). This probably hits the 16yo demographic pretty well.
Could look like adding a bit of entrypoint content geared at pre-college students to 80k, then making some info packets explaining 80k to counselors as a nonprofit career planning resource with handouts for students, and shipping them to every high school in the US or smth (possibly this is also an international thing, IDK).
This is probably not the best place to post this, but I’ve been learning recently about the success of hacking games in finding and training computer security people (https://youtu.be/6vj96QetfTg for a discussion, also this game I got excited about in high school: https://en.m.wikipedia.org/wiki/Cicada_3301).
I think there might be something to an EA/ rationality game. Like something with a save-the-world (but realistic) plot and game mechanics built around useful skills like Fermi estimation. This is a random gut feeling I’ve had for a while, not something well thought through, so it could be obviously wrong.
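As a toy illustration of what a Fermi-estimation mechanic could look like under the hood (everything here is hypothetical: the bounds, the piano-tuner question, the function name): the player supplies low/high bounds for each factor, and a Monte Carlo over log-uniform draws turns those into a distribution over the answer, which the game could then score against the true value.

```python
# Toy Fermi-estimation mechanic: multiply log-uniform draws from the
# player's per-factor bounds to get a distribution over the estimate.
import math
import random

def fermi(factors, n=10_000, rng=None):
    """factors: list of (low, high) bounds. Returns sorted product samples."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    samples = []
    for _ in range(n):
        prod = 1.0
        for low, high in factors:
            # log-uniform draw between the player's bounds
            prod *= math.exp(rng.uniform(math.log(low), math.log(high)))
        samples.append(prod)
    return sorted(samples)

# Classic example: piano tuners in a city, as
# households * pianos-per-household * tunings-per-year / tunings-per-tuner-year
factors = [(100_000, 1_000_000), (0.02, 0.2), (0.5, 2), (1 / 2000, 1 / 500)]
samples = fermi(factors)
median = samples[len(samples) // 2]
```

A game could show the player the resulting spread, score how well calibrated their bounds were, and tighten the required precision as levels progress.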
A couple advantages over the typical static content like videos or written intro sequences:
games can be “stickier”
ppl seem to enjoy intricate, complex games even while avoiding complex static media for lack of time; this is true of many high-school aged ppl in my experience
games can tailor different angles into EA material depending on the user’s input
games can both educate and filter for/ identify people who are high aptitude, unlike written content or video
because games can collect info about user behavior, you might have a much richer sense of where people are dropping out to prototype/ AB test on
anecdotally, smart ppl I went to highschool with seemed to have their career aspirations shaped by videogames, primarily toward wanting to do computer science to be game developers. Maybe this could be channelled elsewhere?
A few downsides of games:
limited to a particular demographic interested in videogames
a lot of rationality/ EA stuff seems maybe quite hard to gamify?
maybe a game makes EA stuff seem fantastical
maybe a game would degrade nuance/ epistemics of content
maybe games are quite expensive to make for what they are?
I have zero expertise or qualifications except occasionally playing games, but feel free to DM me anyway if you are interested in this :)
I appreciate the answers so far!
One thing I realized I’m curious about in asking this is something about how many groups of people/ governing bodies are actually crazy enough to use nuclear weapons even if self-annihilation is assured. This seems like an interesting last check against horrible mutual destruction stuff. The hypothesis to invalidate is: maybe the types of people assembled into the groups we call “governments” are very unlikely to carry an “activate mutual destruction” decision all the way through. To be clear, I don’t believe this, and I think there is good evidence that individuals will do this, but I feel sufficiently confused about the gov dynamic to ask.
Of all the national regimes and regional ruling factions since 1950, how many would have used nukes even if they knew an adversary would retaliate with overwhelming force? Have there been any real situations where non-great-power govts were pushed so far as to resort to nuclear (enemy + self) destruction?
For example, my extremely amateur read makes it seem like Israel was at least somewhat close to nuclear in the Yom Kippur War. And I’d guess that some of the more insane genocide-y civil war factions like the Khmer Rouge wouldn’t have been that concerned about the self-destruction bit, though I don’t know enough history to say for sure, or if they were ever pushed to a breaking point.
I’m familiar with all the standard US-Russia examples of this (I think), and when I put my skeptic hat on/ try to steelman, it seems like it’s hard to know how many additional “filters” would need to be cleared before actual launch. I’d be interested in cases of the form “and then [the gov’t or civil war faction or w/e] took some action which they indisputably believed at the time would lead to a large scale tragedy, destroy themselves and all their loved ones, etc”. Cases where the group definitely believed they slapped “defect” in the mutually assured destruction game (at least on some scale). Maybe none exist outside of cults and terrorist groups? Though some of those groups might be more govt-like than others.
Great set of links, appreciate it. Was especially excited to see lukeprog’s review and the author’s presentation of Atomic Obsession.
I’m inclined toward answers of the form “seems like they would have been used more or some civilizational factor would need to change” (which is how I interpret Jackson’s answer on strong global policing). Which is why I’m currently most interested in understanding the Atomic Obsession-style skeptical take.
If anyone is interested, the following are some of the author’s claims which seem pertinent, at least as far as I can tell (from the author’s summary, a couple reviews, and a few chapters but not the whole book):
1) Nuclear weapons are not cost effective for practical military purposes or terrorists.
2) Many people have been alarmists about nuclear weapons, in describing their destructive powers and forecasting future developments.
3) Nuclear weapons have not played a major role as deterrents nor in shifting diplomatic dominance.
It seems like the first two are pretty straightforwardly true. (3) is most interesting, and I haven’t been able to make Mueller’s argument crisp for myself on this point. My attempt at breaking down (3), with some of my own attempt at steelmanning:
a) Nuclear weapons are really expensive
b) Gaining nuclear weapons upsets your neighbors, which is an additional cost
c) There are cheaper ways of getting a more compelling deterrent, for example North Korea could invest in artillery to put more pressure on Seoul.
d) Countries didn’t really have any interest in going to war, anyway, so deterrents were not needed (I think he claims something about Stalin and other communist powers having no interest in war with western powers)
e) Nukes are technically complex and even if smaller actors, possibly including e.g. factions in a civil war, were to steal them, they would have a hard time using them
f) Nukes are easy to police because nuclear forensics are quite good at attributing events to their creators
g) People have to be really crazy to use nuclear weapons, given they aren’t very effective on military targets and can’t actually help you win, only guarantee your own destruction
(It seems worth mentioning that in my actual cursory read of Mueller’s arguments in the form mentioned above, I found some points I’ve omitted because they seem mutually inconsistent and make him seem dogmatic to me. For example at one point in his nuclear terrorism section he seems to use the fact that the CIA would probably have infiltrated a group as evidence for the overarching claim that investment in counter-proliferation is wasted. The contradiction is obviously that the CIA probably wouldn’t invest as much in infiltrating terrorist groups attempting to build nukes if that was less of a priority. )
If we take my hypothetical to mean “nuclear weapons are cheaper to build” (sorry for the ambiguity there) then a, b, c and e seem basically null. I read d) as pretty far removed from the facts; there’s some good evidence for this in the comments of the lukeprog post, especially Max Daniel’s.
Which leaves f- Nukes are easy to police, and g- people aren’t crazy enough to actually use them.
Re direct military conflicts between nuclear weapons states: this might not exactly fit the definition of “direct”, but I enjoyed skimming the mentions of nuclear weapons in the Wikipedia article on the Yom Kippur War, which saw a standoff between Israel (nuclear) and Egypt (not nuclear, but reportedly delivered warheads by the USSR). There is some mention of Israel “threatening to go nuclear”, possibly as a way of forcing the US to intervene with conventional military resources.
Interesting! For (1) how do you expect the economic superpowers to respond to smaller nations using nuclear weapons in this world? It sounds like because of MAD between the large nations, your model is that they must allow small nuclear conflicts, or alternatively pivot into your scenario 2 of increased global policing, is that correct?