A wayward math-muddler for bizarre designs, artificial intelligence options, and spotting trends no one wanted; articles on Medium as Anthony Repetto
Anthony Repetto
Bio, briefly, bared to all :)
Thank you for your detailed response! I am glad when we explore the design-space; to find a decent idea, we must find the patterns amongst many.
In regards to each specific option you mentioned:
Futarchy hopes to create a prediction-market, and reward those who make good predictions, yet it does nothing to reward those who invest in the implementation of beneficial projects; that is the key difference.
Eliezer is looking for a way to coordinate crowd actions; again, the people who invest time and resources are not rewarded directly and materially for funding those benefits.
Pigovian taxes, for negative externalities, are included in my line of reasoning, yet they miss the key point again: material reward for those who invest in beneficial projects.
Partisan cost-benefit discussions commonly occur among elected officials prior to knowing the truth; I contrast that with verification that policy performed as expected, carried out by randomly assigned apolitical experts, using metrics agreed upon in legislation, in line with the concepts of principled negotiation (see Bill Ury’s classic “Getting to Yes”). I have no expectation that a legislature should ever determine the worth of each proposal—they’d never finish! And I would be glad to pass a ban on political parties at a Constitutional Convention, though I know that won’t happen. Instead, I expect we will need to start from scratch, in which case a foundation of apolitical metrics is better than none.
Quadratic funding is nice, to balance the interests of the majority, yet again it does nothing to directly and materially reward those who invest in the solutions. A hope that the coin functions better, maintaining its value or increasing in value, is only a secondary reward, and that reward is focused in the hands of the largest holders.
So, the key difference I have with all of those mentioned is that I propose an incentive for regular, self-interested investors to pay for benefits. That certainly would not happen under any of those other methods. (And yes, those taxes and dividends would be retroactive, similar to the prizes for social good you mentioned.) Without material rewards for the investors, we will only pull a few tens of billions in philanthropic dollars toward benefits every year, while my goal is to internalize externalities in totality, or close to it.
This is a critical distinction, because there are a few hundred trillion dollars sloshing around in businesses and real estate, NOT because the investors like capitalism—rather, they like a high rate of return! :0 Due to the efficiencies available in the space of positive externalities, we can give investors that rate of return, while keeping the lion’s share of the benefits in the public. Investors would look at their distribution of assets and say “oh, I was only holding this real estate as a HEDGE against inflation, I’d rather put my cash into a diversified portfolio of public benefits, because they earn an annual return of greater than 12%!” That’s how we can get tens of trillions thrown toward beneficent work. Just give them enough cash back that they are happy. (...and, that shift in assets would lower the price of urban real estate, once they clear out of it!)
[[It’s also worth noting that the article you gave to dismiss wealth taxes only showed that “European countries had all kinds of weird exemptions, while they didn’t exclude basics like your farm or small business, so some people suffered… they also let the millionaires leave—which they did.” Elizabeth Warren’s wealth tax plan, in contrast, is shown in that same article to address those concerns. Compared to the disincentives from sales tax and income tax, I’d prefer a progressive-rate wealth tax only, with a hefty cut taken from those who leave citizenship behind.]]
I see some form of governance as essential to enforce taxation; and that taxation is the only reliable means to gather the broadly-distributed value of most positive externalities, in order to reward investors. Without that reward-structure, we’re leaving tens of trillions of dollars on the table, when we could be spending that on public benefit. So, what might that governance look like? A smart contract, binding sea-steaders into a loose federation, perhaps? I look forward to any other ideas that come up! It should seek to internalize as much as possible, ideally leaving no externality untouched, so that pricing is an accurate representation of public value, and allocation is efficient. Then, I wouldn’t be so worried about markets ruining planets. :)
Thanks for asking! I am looking for someone who can run a simulation of these ‘humidity traps’, at various scales of the design, to find the minimum size necessary to reliably generate a water spout; then, simulating the open waters of the Gulf and Caribbean with various densities of the humidity traps across the surface. With those two simulations, we can get a reasonable estimate of cost, as well as checking for potential problems in the larger climatic circulation.
I am not looking for funding; I expect it would be impossible to make anything happen without approval and involvement of at least the state governments in impacted areas. And, they should pay for it; I don’t see a need to pull philanthropic dollars away from other areas.
I am not looking for funding; I am looking for collaborators who do oceanic and atmospheric simulations, because I do not do those things. You’d have to ask them how long they normally need to do that, and what amount they would want to be compensated for the task. I am not hiring; I’m seeking collaboration.
“Purposefulness” seems to be close to the meaning-feeling you explained, and though objective states can be described, those descriptions cannot in-and-of-themselves become normative statements (“better”, “should”,...). Does that sound close to a ‘why?’ for the difference between meaning-feeling and objective meaning—that “objective meaning cannot be converted on its own into meaning-feeling” due to the normative/descriptive split?
I also see meaning-feeling wrapped in myths and expectation, while objective meaning concentrates on accurate metrics. While the two seemed a division between ‘human/machine’ thinking, we now have machines which are beginning to form vague intuitions and biases, expectations and myths.
Meaning-feeling may be less of an evolutionary necessity, and more an unintentional byproduct of a brain’s attempt to find patterns. That is, because “finding pattern ⇒ usually a reward”, our brains AND machine ones are biased toward pattern-seeking. Spurious correlations are common, as a result, though Darwin often lets them be.
I have a suspicion that even our greatest artificial intelligences will fall into many of our mental traps, because these stumbling blocks are a byproduct of statistical illusions, not evolution or perverseness. An example I keep in mind is Kahneman & Tversky’s research on Israeli pilots. (Their instructors swore that punishment improved performance, though that improvement was really just a regression toward pilots’ mean performance, after a bad day!)
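That regression effect can be illustrated with a quick simulation (a sketch under toy assumptions; this is not the original flight-school data): model each pilot as a fixed skill plus independent day-to-day noise, condition on a “bad day”, and the next day improves on average with no intervention at all.

```python
import random

random.seed(0)

# Toy model: each pilot has a fixed skill, plus independent daily noise.
# (A statistical sketch, not the actual Israeli flight-school data.)
def performance(skill):
    return skill + random.gauss(0, 1)

improvements = []
for _ in range(10_000):
    skill = random.gauss(0, 1)
    day1 = performance(skill)
    if day1 < skill - 1:           # instructor sees a "bad day" and punishes
        day2 = performance(skill)  # nothing changed except fresh noise
        improvements.append(day2 - day1)

avg = sum(improvements) / len(improvements)
print(f"average 'improvement' after punishment: {avg:+.2f}")
# Positive on average: pure regression toward the mean, zero causal effect.
```

The instructors (and a pattern-seeking machine) would read that positive number as proof that punishment works.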
Is this near to what you were saying?
I like both the tag-filter and reciprocity.io, thank you for mentioning those! Perhaps, in a similar vein to both, you could let people place tags on their bio for “meet in person” & “meet online”, along with their regional tag (“East Bay”, “Orlando”,...) so that the tag-filters function the way reciprocity.io does?
Thank you for bringing emphasis to the s-risks! I focus on suffering and oppression which are likely in the majority of future scenarios. In particular, the one I have not heard anyone else mention is this:
Life-extension will be used for unending torture by dictators, to threaten and pressure their populations. This will make them more powerful, without increasing the potency of democracies. At the same time, those dictators will become more paranoid and reactive, because their own henchmen keep trying to kill them just to get a promotion. Any in-fighting will be blamed on enemies, to bring the organization together. I expect they will normally resort to sabotage when they attack.
This does not seem like an ‘unlikely’ risk for centuries of suffering—rather, we seem to be barreling towards it with intention! Putin, Xi, Muhammad bin Salman, will all extend their lives as soon as they can, and their next targets will be whoever disobeyed, to ‘make an example of them’. I don’t see how we can realistically prevent these dictators from abusing life-extension for the purpose of immense suffering and the power it gives them.
I would welcome anyone who wants to discuss this s-risk more; I am normally rebuffed by disbelief, though critical assessment of the actual behavior of these bad actors seems to warrant more attention. Xi harvests organs, and keeps people in detention camps, while Putin & MBS like to capture or poison enemies. I worry that too many of us are quick to assume: “it’ll never happen, even though it would give those leaders such an advantage.”
[[A musing on the topic from 2018, for the general readership: “Long Live the King” on Medium, under Anthony Repetto.]]
Ah, yes! I have found my slice of the forum! If anyone in these domains would like to dive further into conversation or potential collaboration, that is key for me. I’ll link to the various articles I’ve written over the years, or at least leave a blurb, on a number of these topics:
Great Power Conflict: 1) China wants Pakistan, and may walk in as ‘peacekeepers’ to subdue the Baluchi people, or protect Pakistan from India. Either way, they have a railroad and port to circumvent India and the Strait of Malacca. 2) Life-extension gives dictators the power to torture forever, which they will use as SOON as they can, to exert influence over their people. Meanwhile, henchmen will want to kill the boss—it’s the only way to get a promotion. Sabotage is likely, and conflict will be blamed on enemies, instigating wider conflicts.
Global Governance & Public Goods: 1) I explore the core constraints I see for any method which might internalize as many externalities as possible. 2) I see sea-steading as an opportunity for the ‘True Fans’ of each political ideology to form their own government… and so, they cannot blame their failure upon anyone else! We desperately need these experiments in political methodology, to determine which ones actually work. With sea-steading, we will finally have willing volunteers and no restrictions. 3) We can ask each person “Which people do you admire, and why?” Follow up with those admired people, to see who they admire; step-by-step this way, you find the most admirable folks.
Outer Space Governance: I identify a few of the key dynamics of space industry, and the immense value of the planet Mercury, which is likely to become the primary source of contention in the coming centuries.
Voting Reform: 1) I identified an unexplored avenue that may allow fair, non-strategic voting (Gibbard’s Theorem only applies to ranked-list data-types, while latent-space vectors allow more powerful operations in their algorithms). 2) I have pushed for a switch toward a volumetric assessment of the combined concerns and interests of the entire population, updated electronically without waiting for elections, as a mandate for government activity, as distinct from one side being ignored for four years at a time and politicians ignoring the issues concerning us most.
Malevolent Actors & Lie-Detection: IIRC, EEGs are able to spot folks with the dark triad, because the region of their brain responsible for empathic feeling is simply dysfunctional. I expect that sea-steading will allow communities to explicitly BAN anyone who cannot pass the EEG for empathic responses. Successful reduction in crime and corruption in those places would be the proof motivating adoption in ‘stickier’ countries.
Economic Growth: 1) To subsidize super-abundant computer power, as well as re-establish a single dominant reserve currency after the decline of the petro-dollar, we could push to make the US dollar backed by computer hardware. Each computer chip can be ‘bound’ to a crypto-token (using Samsung’s patented method for on-chip encryption, which takes advantage of unique defects to make a key that ONLY that computer chip can use), and as those computer chips sit in servers, running the internet & businesses & apps, the token-owner is paid a dividend from that activity. 2) Wealth experiences a positive feedback, simply by possessing it. I argue for a progressive-rate wealth tax, beginning negative as a rebate, with exemptions for home, farm, and first biz, and hefty cuts for leaving citizenship. Without such a damper, wealth will always concentrate, until it destroys us.
Scientific Policy: Researchers need a devoted team of supporters who can make videos and publicize the research proposals, as a ‘Kickstarter’, so that scientists can get to work without wasting time writing grants and being rejected.
Lie-Detection: Artificial Intelligence can be used to spot bias in a human judge—the A.I. is trained to imitate that person’s past decisions; then, you can quiz the A.I. to spot bias, because its ‘frozen’ brain doesn’t remember your previous questions, and doesn’t think to lie.
Wild Animal Welfare: 1) I recommend reading “The Unified Neutral Theory of Biodiversity and Biogeography”—because, when we reduce habitat, all those diverse species will still be present, YET they are consistently losing allele diversity until their total species diversity collapses in a cascade. I fear that we have a few centuries before “inbreeding” kills most biomes, and we have no way to find “good” alleles to give them, once they lose the ones they have. Each decade is the loss of millions of years of allele experimentation, with no hope of a quantum computer large enough to simulate nature properly. 2) Terra Preta do Índio is one of the best soil types—and it was made by natives 3,000 years ago. If we invest more into soil science, to recreate this anthropogenic soil, we can build lasting fertility instead of deserts.
Climate Impacts: I recently posted about a possible avenue worth simulating, to get a cost-estimate: using water spouts (humidity-twisters forming over water) to prevent hurricanes.
I am glad to go into greater detail; I’ll be posting more about each on the EA Forum, bit by bit. Let me know what you’d like me to cover, first!
I can point you to where I did those things...
1] “State the exact problem setting you are addressing,”
- “There is an unexplored domain, for which we have not made any definitive conclusions.” I then hypothesize how we might gain some certainty regarding the enlarged class of voting algorithms, though I am likely wrong! [at the top, in epistemic status]
2] “State Gibbard’s theorem, and”
- Gibbard’s Theorem proved that no voting method can avoid strategic voting (unless it is dictatorial or two-choices only) [In the TL;DR at the top]
- He restricted his analysis to all “strategies which can be expressed as a preference n-tuple”… a ranked list. [Second sentence of the first paragraph under “Gibbard’s Theorem” header, at the beginning of the body of the post]
3] “Show how exactly machine learning has solutions for that problem.”
- This is proof by existence that “An algorithm CAN function properly when fed latent-space vectors, DESPITE that algorithm failing when given ONLY a ranked-list.” So, the claim of Voting Theorists that “if ranked-lists fail, then everything must fail” is categorically false. [The third paragraph of the “Gibbard’s Theorem” section, first sentence]
4] “Rather, it applies for any situation where each participant has to choose from some set of personal actions, and there’s some mechanism that translates every possible combination of actions to a social choice (of one result from a set).”
No, specifically, Gibbard frames all those choices as a particular data-type: “the domain of the function g consisting of all n-tuples...”, (p.589) and he presumed that such a data-type would be expressive enough to find any voting algorithm that would be non-strategic, if any such algorithm could exist. By restricting himself to that data-type, he missed the proof by existence I mentioned above.
5] “that doesn’t mean it’s the end of the world”
At no point did I claim this was an existential risk—neither is shrimp welfare. I’m not sure what point you’re trying to make with this comment. At the bottom of my post, the section titled “Purpose!” I outline the value statement: “considering that governments’ budgets consume a large fraction of the global economy, there are likely trillions of dollars which could be better-allocated. That’s the value-statement, justifying an earnest effort to find such a voting algorithm OR prove that none can exist in this case, as well.”
I’m not sure why I was able to answer all your questions with only quotes from my post. Did I clump the thoughts into paragraphs in a way that made all of them easy to miss?
Thank you for the critique! I’ll tone-down my emphases—my own impulse would have been to color-code with highlighters and side-bars, but I see that’s not what most people want, here :)
And thank you also for calling more attention to the problem—even if water spouts don’t have the muscle for it, I’m one to keep looking. A terrifying option I left behind, which still might inspire something by way of contrast: inducing the hurricane itself, further east in the Atlantic, and just trying to steer it over water.
Vortices hold immense energy, yet they are relatively low-energy to nudge (research on plasmonic vortices relies upon that high ratio of held-energy to nudge-energy). Though, I doubt solutions in that vein would get as much buy-in as water spout tarps might, especially because water spouts would be a redundant system of many identical parts, instead of relying upon a single rudder for a storm.
If you have inspirations, however unusual, I am glad to hear it!
Oh, I only expect that the water spouts could be activated once the sun had accumulated enough over-heated high-humidity air within the tarp-layers… sometime late in the afternoon. Yet, the water spout removes more of the surface humidity than would convect otherwise—this allows further evaporation and cooling of surface waters. If that effect is strong enough, over a large area per water spout, then it might weaken hurricanes when they pass.
I don’t expect the water spouts to carry most of their moisture high into the air, as adiabatic cooling will condense the majority of it quickly. Yet, that plume would still leave moisture high enough for mixing, and it would be hot and humid, pushing higher. If that can increase cloud cover without a thousand airplanes dumping chemicals, that sort of geo-engineering might be an easier pitch to the public & government.
[Note: The key difference between the water spout and natural convection is that a vortex will sustain itself at a higher rate of flow, fueled faster by the thermal gradient. My hope is that this would increase surface evaporation enough to cool waters, weakening the storm. Clouds would be nice, however much we can get; I just expect evaporation to play a larger role.]
Oh, beautiful! Thank you :) There’s so much depth of water to work with, that might easily diffuse a few degrees, which is all we need.
Thank you VERY much for bringing this to my attention!
And, I would say this is almost-exactly what I had in mind! (The “investor” I referred to is merely the role; any normal person could throw a dollar in the pot, becoming an investor; and any community could propose benefit-plans.) If those “government priorities & funding” were enshrined as a mandate to regularly-updated public concerns, funded with a singular tax-rate that is raised or lowered to match the total quantity of benefits, then I couldn’t tell the difference between our plans.
I see vast change becoming possible, when you can earn a competitive return from public good, funneled through taxation for efficiency and fairness’ sake. It’d bring a few orders of magnitude more resources to bear on our troubles. (And, if we manage to internalize most externalities, then pricing is a decent representation of real cost, which would provide systemic gains to efficiency.)
Ah, now that I’ve started looking further—an assessment of those Social Impact Bonds, here. They note: “Using a single outcome to define success may miss a range of other benefits that might result from the program—benefits that also have real value but will go unmeasured” (Berlin, 2016), which is in line with what I’d said elsewhere on the idea: whatever we don’t measure tends to bite us in the butt. (That includes which groups we listen to!)
The report also mentions: “By design, nearly all of the early SIBs were premised on government-budget savings. Indeed, in those deals, payments to investors depended on those savings.” This would be a huge hindrance, with the only evaluated benefits being ‘costs to gov’t avoided’. Only by including as much of the real externality as feasibly measurable could we hope to incentivize the right solutions.
So, though SIBs are essentially the same mechanism I mention, they do seem to be falling short for reasons I’d expected and planned around. What I propose instead is a singular legislative document, setting the tax rate to match whatever the total benefits happen to be, with a devoted branch of the executive determining externalities via transparent metrics and statistical safeguards. Investors might also feel that “this is like a government bond”, if we give them a stronger institutional commitment. One-off policy goals find their funding pulled regularly, in contrast, which would spoil the potential of the investment. I’d guess I’m SIB-adjacent?
Oh, I like that level of specificity—a “contact via email” as opposed to “zoom-compatible” for example? I suppose the biggest determinant for which method to communicate would be the number of participants; video conference being the default for the diverse web of dialogue, while emails glean slow-thinking well for one-on-one exchanges. And, most ideas seem to progress from those pairwise dialogues early in development, towards larger congregations later in their cycle. So, each new idea might find value at different points from both the “email/zoom” distinctions? I’m sure there are others which might help, too!
Hello! I am slowly seeping into the Forum floorboards, dripping down the comments section, leaving meandering mumblings along an electronic thread. Most of my thoughts are obscure and dubiously specific. Expect errors; I do. And, I value dialogue not for compromise, but to send feelers out in all directions of the design-space. Those lateral extremes bind the constraints of good ideas, found only after pondering a few dozen flops! I’m glad to turn them around, to find any lucky inspirations. Most domains are a straight path up my alley; I follow specific problems into each arena, in turn.
My odd angle on your Key Considerations:
- Prosaic AGI: Considering Geoff Hinton’s GLOM, and implementations of equivariant capsules (which recently generalized to out-of-distribution grasps after only ten demonstrations!), as well as the Sparse Representations of Numenta, and the Mixture of Experts models which Jeff Dean seems to support in Google’s Pathways speech… it DOES seem like all the important components for a sort of general intelligence are in place. We even have networks extracting symbolic logic and constraints from a few examples. The barriers to composability, analogy, and equivariance don’t seem to be that high, and once those are managed, I don’t see many other hindrances to AGI.
- Sharpness: Improvements in neural networks took years, from the effort of thousands of the best brains; we’re likely to have a SLOW take-off, unless the first AGI is magically thousands of times faster than us, too. (If so, why didn’t we build a real-time AGI when we had 1⁄1,000th the processing power?) And, each new improvement is likely to be more difficult to achieve than the last, to such an extent that AGI will hit a maximum—some “best algorithm”. That limit to algorithms necessitates a slowing rate of improvement, and we’re likely to already be close to that peak. (Narrow AI has already seen multiple 100x and 1,000x gains in performance characteristics, and that can’t go on forever.)
- Timeline: With the next round of AI-specialized chips due in 2022 (Cerebras has a single chip the size of thousands of normal chips, with memory embedded throughout to avoid the von Neumann Bottleneck) we’ll see a 100x boost to energy-efficiency, which was the real barrier to human-scale AI. Given that the latest AIs are ~1% of a human brain, then a 100x boost puts AI within striking-distance of humans, this next year! I expect AGI to be achievable within 5 years… just look at where neural networks were five years ago.
- Hardness: I suspect that AGI alignment will be forever hard. Like an alien intelligence, I don’t see how we can ever really trust it. Yet, I also suspect that narrow super-intelligences will provide us with MOST of the utility that could have been gained from AGI, and those narrow AIs will give us those gains earlier, cheaper, with greater explainability and safety. I would be happy banning AGI until narrow AI is tapped out and we’ve had a sober conversation about the remaining benefits of AGI. If narrow AI turns out to do almost everything we needed, then we can ban AGI without risk or loss. We won’t know if we really even need AGI until we see the limits of narrow AI—and we are nowhere near those limits, yet!
An odd window to an unmentioned scenario:
If narrow super-intelligence is competent for almost all the things we would trust AI to do, such that a switch to AGI is expensive, risky, with a low margin: then, we wouldn’t need to worry about ‘missing-out’ if we ban AGI. From a glance at the decision-tree, it seems better to explore narrow AI fully, so that we can see how much value is left on the table for AGI to yield us.
Additionally, I expect AGI to be possible within the next 5 years. (You can hold me to that prediction!) Looking back five years, and at the recent capabilities toward generalization from few examples, equivariance, as well as formulating & testing symbolic expressions—we might already be close to the necessary algorithms. And, with companies like Cerebras offering orders of magnitude more energy-efficient compute in this next year, then human-brain-scale networks seem to be on the doorstep already.
[[Tangent of Details: GPT-3 and the like are ~1% the network scale of a human brain, and Cerebras’ chip will support AI up to 20% larger than such a ‘human’ connectome. You might be tempted to claim “neurons are more complex”, yet the proficiencies demonstrated with GPT-3, using only 1% of our stuff, betray the argument for biological superiority. AI is satisfied with 16-bit precision, for example. Our brains are heavily redundant and jumbly, so out-performing us might take much less effort. Heck, GPT-3-level performance is now possible with a network 25x smaller… “0.04% of a human brain”, yet it works as well as us.]]
So, narrow AI that uses 1/100th the compute can usually do the task fine. GPT-3 was writing convincing poetry. If someone can choose between a single AGI or a hundred narrow AIs, they’ll probably choose the latter. It would let you do 100x more stuff per second, and swapping between the networks loaded in memory would still allow you to utilize myriad task-specific AI. Those narrow AI will be easier to train AND verify, as well.
Let’s ban AGI, because I don’t think it’d help much, anyway!
Oh, my apologies for not linking to GLOM and such! Hinton’s work toward equivariance is particularly interesting because it allows an object to be recognized under myriad permutations and configurations; the recent use of his style of NN in “Neural Descriptor Fields” is promising—their robot learns to grasp from only ten examples, AND it can grasp even when pose is well-outside the training data—it generalizes!
I strongly suspect that we are already seeing the “FOOM,” entirely powered by narrow AI. AGI isn’t really a pre-requisite to self-improvement: Google used a narrow AI to lay out their chips’ architecture, for AI-specialized hardware. My hunch is that these narrow AI will be plenty, yet progress will still lurch. Each new improvement is a harder-fought victory, for a diminishing return. Algorithms can’t become infinitely better, yet AI has already made 1,000x leaps in various problem-sets… so I don’t expect many more such leaps, ahead.
And, in regards to a ‘100x faster brain’… Suppose that an AGI we’d find useful starts at 100 trillion synapses, and for simplicity, we’ll call that the ‘processing speed’ if we run a brain in real-time: “100 trillion synapse-seconds per second.” So, if we wanted a brain which was equally competent, yet also running 100x faster, then we would need 100x the computing power, running in parallel to speed operations. That would be 100x more expensive; and if you posit that you had such power on-hand today, then there must have been an earlier date when the amount of compute was only “100 trillion synapse-seconds per second”, enough for a real-time brain, only. You can’t jump past that earlier date, when only a real-time brain was feasible. You wouldn’t wait until you had 100x compute; your first AGI will be real-time, if not slower. GPT-3 and Dall-E are not ‘instantaneous’, with inference requiring many seconds. So, I expect the same from the first AGI.
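That back-of-the-envelope can be written out explicitly (the synapse count and hardware doubling time are illustrative assumptions from the comment, not measurements):

```python
import math

# Illustrative arithmetic; the synapse count and doubling time are
# assumptions for the sketch, not measured values.
SYNAPSES = 100e12          # ~100 trillion synapses, run in real time
speedup = 100              # a desired "100x faster brain"

compute_needed = SYNAPSES * speedup   # 100x faster => 100x parallel compute

# Under steady hardware growth (assume compute doubles every 2 years),
# the date when a real-time brain first became affordable precedes the
# 100x-brain date by:
doubling_years = 2
lead_time = math.log2(speedup) * doubling_years

print(f"compute for the 100x brain: {compute_needed:.0e} synapse-updates/s")
print(f"a real-time brain was feasible ~{lead_time:.1f} years earlier")
```

Under those assumptions the real-time brain becomes affordable over a decade before the 100x version, which is the window the comment argues cannot be skipped.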
More importantly, to that concept of “faster AGI is worth it”—an AGI that requires 100x more brain than a narrow AI (running at the same speed regardless of what that is) would need to be more than 100x as valuable. I doubt that is what we will find; the AGI won’t have magical super-insight compared to narrow AI given the same total compute. And, you could have an AGI that is 1/10th the size, in order to run it 10x faster, but that’s unlikely to be useful anywhere except a smartphone. For any given quantity of compute, you’d prefer the half-second-response super-sized brain over the micro-second-response chimp brain. At each of those quantities of compute, you’ll be able to run multiple narrow AIs at similar levels of performance to the singular AGI, so those narrow AIs are probably worth more.
As for banning AGI—I have no clue! Hardware isn’t really the problem; we’re still far from tech which could cheaply supply human-brain-scale AI to the nefarious individual. It’d really be nations doing AGI. I only see some stiff sanctions and inspections-type stuff, a la nuclear, as ever really happening. Deployment would be difficult to verify, especially if narrow AI is really good at most things such that we can’t tell them apart. If nations formed a kind of “NATO-for-AGI”, declaring publicly to attack any AGI? Only the existing winners would want to play on the side of reducing options for advancement like that, it seems. What do you think?
Another vantage-point for identifying where AND when to direct one’s research & energy:
Newly-OPENED avenues: when a new technology, or a radical change in policy or action, has just occurred. Those moments of change, especially when something crosses the ‘threshold of viability’, are key times to focus your research and re-evaluation. The actions which are available in that early stage of viability also tend to have an OUTSIZED influence overall. Those shifts in viability open a range of actions to us, briefly, before behaviors settle back into a new equilibrium.