These are really interesting heuristics that I think are non-obvious (I’ve never heard anything like them before) but clearly useful.
I’m curious what your definition of “crazy” is. Does “crazy” mean low probability of success and high expected value? By extension, does “too crazy” mean it has a higher payoff than almost anything else, but too low a probability of success to be worthwhile? I can certainly think of things EAs are funding that I don’t believe they should be funding, but I don’t know if that’s the same thing as “crazy.”
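If "crazy" does mean low probability of success but high expected value, the distinction between "crazy" and "too crazy" can be made concrete with a toy expected-value comparison. This is a minimal sketch; all the numbers below are made up purely for illustration, not estimates of any real intervention:

```python
# Toy expected-value comparison (all numbers are invented for illustration).
def expected_value(p_success, payoff, cost):
    """EV of funding a project: chance of success times payoff, minus cost."""
    return p_success * payoff - cost

# A "safe" intervention: likely to work, modest payoff.
safe = expected_value(p_success=0.5, payoff=10, cost=1)             # 4.0

# A "crazy" moonshot: rarely works, huge payoff, yet higher EV than safe.
crazy = expected_value(p_success=0.01, payoff=1000, cost=1)         # 9.0

# "Too crazy": the payoff is bigger still, but the success probability is
# so low (or so hard to estimate) that the EV no longer clears the bar.
too_crazy = expected_value(p_success=0.0001, payoff=20000, cost=1)  # 1.0

print(safe, crazy, too_crazy)
```

On this framing, "crazy but worthwhile" and "too crazy" differ only in where the probability estimate lands, which is exactly the part that is hardest to pin down for ideas outside familiar reference classes.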
WRT “crazy”, I mean things that might not pass an initial sniff test (the absurdity heuristic), things that are outliers in reference-class space and thus hard to reason about via analogy, things that make taboo tradeoffs and are thus bad to talk about publicly for brand reasons, or just plain audacity. There may be more cues for thinking about these; I haven’t tried to apply tools to it yet.
Crazy to EAs or crazy to the general population? If it’s the latter, AI-safety research qualifies. If it’s the former, EAF’s wild animal suffering research might still qualify. If you disagree, give an example of a crazy idea.
Paying researchers to investigate AI safety and WAS doesn’t seem crazy at all to me given the low cost of exploration. Pilot interventions might qualify as crazy, once identified.
Actually crazy would be funding the person who thinks solar updraft towers (https://en.wikipedia.org/wiki/Solar_updraft_tower) can be built an order of magnitude cheaper than current projections and wants to build some prototypes. (I can’t find a link to his page; I have it somewhere in my notes.) Other moonshots in the same reference class (the area has not been explored, but has potential for large gains once upfront costs have been paid): energy storage, novel biological models underlying disease (remember when everyone laughed at bacteria?), starting additional research-focused group houses with different parameters, and radical communication research aimed at eliminating the hurdles to remote work.
These are off the top of my head, but the object-level examples are less the point than the fact that we aren’t putting effort into coming up with stuff like this, versus further elaborating on the stuff we’ve already found.
Was this the guy you were thinking of? :D
http://www.superchimney.org/ (video)
To make it even crazier, buy all the land around the superchimney, build a charter city around the chimney once it starts working, and make a fortune in real estate.