Jade Leung: How Can We See the Impact of AI Strategy Research?

For now, the field of AI strat­egy is mostly fo­cused on ask­ing good ques­tions and try­ing to an­swer them. But what comes next? In this talk, Jade Le­ung, Head of Re­search at the Cen­ter for the Gover­nance of AI, dis­cusses how we should think about prac­ti­cal el­e­ments of AI strat­egy, in­clud­ing policy work, ad­vo­cacy, and brand­ing.

Below is a tran­script of Jade’s talk, which we’ve lightly ed­ited for clar­ity. You can also watch the talk on YouTube or read it on effec­tivealtru­ism.org.

The Talk

Tech­nol­ogy shapes civ­i­liza­tions. Tech­nol­ogy has en­abled us to hunt, gather, set­tle, and com­mu­ni­cate. Tech­nol­ogy to­day pow­ers our cities, ex­tends our lifes­pans, con­nects our ideas, and pushes the fron­tier of what it means to be hu­man.

Tech­nol­ogy has also fueled wars over power, ide­ol­ogy, pres­tige, his­tory, and mem­o­ries. In­deed, tech­nol­ogy has pushed us to the precipice of risk in less than a decade — a fleet­ing mo­ment in the times­pan of hu­man civ­i­liza­tion. [With just a few years of re­search,] we equipped our­selves with the abil­ity to wipe out the vast ma­jor­ity of the hu­man pop­u­la­tion with the atomic bomb.

If we rewind to the emergent stage of these transformative technologies, we have to remember that we are far from being clear-eyed and prescient. Instead, we're some combination of greedy, clueless, confused, and reckless. But ultimately, how we choose to navigate between the opportunities and risks of transformative technologies will define what we gain from them, and what risks we expose ourselves to in the process.

This is the canon­i­cal challenge of gov­er­nance of these trans­for­ma­tive tech­nolo­gies. To­day, we’re in the early stages of nav­i­gat­ing a par­tic­u­lar tech­nol­ogy: ar­tifi­cial in­tel­li­gence (AI). It may be one of the most con­se­quen­tial tech­nolo­gies of our time and the most im­por­tant one for us to get right. But get­ting it right re­quires us to do some­thing that we’ve never done be­fore: For­mu­late a nav­i­ga­tion strat­egy with de­liber­ate cau­tion and ex­plicit al­tru­is­tic in­ten­tion. It re­quires us to have fore­sight and to ori­ent [our­selves] to­ward the long-term fu­ture. This is the challenge of AI gov­er­nance.

If we think about our history and track record, our baseline is pretty far from optimal. That's a very kind way of saying that it sucks. We're not very good at governing transformative technologies. Sometimes we go down this path and [the journey] is somewhat safe and prosperous. Sometimes, we falter. We pursue the benefits of synthetic biology without thinking about how that affects biological weapons. Sometimes we stop ourselves at the starting line because of fear, failing to pursue opportunities like atomically precise manufacturing or stem cell research. And sometimes we just fall into valleys.


[Slide 2]

Dur­ing the Cuban Mis­sile Cri­sis, Pres­i­dent John F. Kennedy es­ti­mated that the chance of nu­clear war was one in three. One in three.

The re­al­ity is we’ve been pretty damn lucky. We de­serve no credit for [avoid­ing any of these catas­tro­phes]. But as my swim­ming coach once said, “If you’re re­ally, re­ally, re­ally bad at some­thing, you only need to try a lit­tle bit to be­come slightly bet­ter at it.” So here’s to be­ing slightly bet­ter at nav­i­gat­ing these trans­for­ma­tive tech­nolo­gies.

I think there are three goals in the AI strat­egy and gov­er­nance space [that can help us rise] slightly [above] our cur­rently awful baseline.

[Slide 5]

Goal num­ber one: Gain a bet­ter un­der­stand­ing of what this land­scape looks like. Where are the moun­tains? Where are the valleys, the slip­pery slopes, the utopias? This is su­per-hard to do. It’s very spec­u­la­tive and un­cer­tain, so we need to be hum­ble. But we should try any­way.

The sec­ond thing we can try to do is equip our­selves with good heuris­tics for nav­i­ga­tion. If un­cer­tainty is an oc­cu­pa­tional haz­ard of work­ing in this space, then we can try to figure out, in gen­eral terms, what might be good and bad [to pur­sue]. How should we ori­ent our­selves? Which di­rec­tions do we want to go in?

The last goal is to trans­late these heuris­tics into ac­tual nav­i­ga­tion strate­gies. How do we en­sure that our heuris­tics make it into the hands of the peo­ple who are turn­ing this boat in cer­tain di­rec­tions?

If you’ll stick with my nav­i­ga­tion metaphor for a bit longer, we can think of the first goal as a map­ping ex­er­cise to de­ter­mine where the moun­tains, valleys, wa­ter sources, and cafes with good wifi are. The sec­ond goal is about equip­ping our­selves with a com­pass. If we know that there are ag­gres­sive rhinos to the south and good ve­gan restau­rants in the north, we’ll go north in­stead of south.

This metaphor is kind of fal­ling apart, but the third goal is the steer­ing wheel. You can’t use it if you’re in the back of the car. That’s ul­ti­mately what I want to fo­cus on to­day: How do we make sure that [our map and com­pass will be used to steer — i.e., to make real-world de­ci­sions about AI]?

[I have two reasons for focusing] on this. First, AI strategy and governance research is effective when it happens upstream of actionable, real-world tactics and strategies. It can be relatively far upstream. I think we would lose a lot of good research questions if [we were always motivated by] whether something could inform a decision today. But I think it would be a mistake for anyone who does AI strategy and governance research to [avoid] thinking about how they expect their research to [play out] in relevant, real-world decisions.

That leads me to the sec­ond rea­son for fo­cus­ing: I don’t think we know how to do this [make our re­search ac­tion­able] well. I think we in­vest far more effort into un­der­stand­ing how to do good re­search than we do into un­der­stand­ing how to [come up with] good tac­tics. Don’t get me wrong: I don’t think we know how to do good re­search yet. We’re still try­ing to figure that out. And I find it hilar­i­ous that peo­ple think that I know how to do good re­search; if only you knew how lit­tle I know! But I think we need to in­vest far more pro­por­tional effort into [ask­ing our­selves]: Once we’ve done our re­search and have some in­sights in place, what do we do to [ap­ply] them and [in­fluence] the di­rec­tion in which we’re go­ing?

[Slide 6]

With that in mind, let’s start at the end. What are the de­ci­sions that we want to in­fluence in the real world? Another way to ask this ques­tion is: Who is mak­ing the de­ci­sions that we want to change?

They fall into two broad cat­e­gories: (1) those de­vel­op­ing and de­ploy­ing AI and (2) those shap­ing the en­vi­ron­ment in which AI is de­vel­oped and de­ployed.

[Slide 7]

Those developing and deploying AI include researchers, research labs, companies, and governments.

In terms of [the sec­ond group], there are a num­ber of differ­ent en­vi­ron­ments to shape:

* The re­search en­vi­ron­ment can be shaped by lab lead­ers, fun­ders, uni­ver­si­ties, and CEOs. They shape the kind of re­search that is be­ing in­vested in — i.e., the re­search con­sid­ered within the Over­ton win­dow.


* The leg­is­la­tive en­vi­ron­ment, which con­strains what can be de­ployed and how, can be shaped by leg­is­la­tors, reg­u­la­tors, states, and the peo­ple [be­ing gov­erned].


* The market environment can be shaped by investors, funders, consumers, and employees. They create incentives that drive certain forms of development and deployment, because of supply and demand.

Now, you can ei­ther be­come one of these de­ci­sion-mak­ers or you can be­come a per­son who in­fluences them. This is in no way a com­men­tary on your brilli­ance as hu­man be­ings. But none of you will be­come im­por­tant. I’m un­likely to be­come im­por­tant. The re­al­ity is that’s how the world works. If you do end up be­com­ing an im­por­tant per­son, the record­ing of this talk is your voucher for a free drink on me. But if you as­sume that I’m right, most of you are go­ing to fall into the cat­e­gory of peo­ple who in­fluence de­ci­sions as op­posed to mak­ing them.

There­fore, I’m go­ing to [spend the rest of this talk] fo­cus­ing on this ques­tion: How do we in­crease our abil­ity to in­fluence the de­ci­sions be­ing made [about AI]?

[Slide 8]

There are many steps, but I see them fal­ling into two broad ar­eas. The first step is hav­ing good things to say. The sec­ond step is mak­ing sure that the peo­ple who mat­ter [are made aware of] these good things.

[Slide 9]

A quick note on what I mean by “good”: I'm conceiving of “good” both in the normative sense of steering our world in a direction that we want, and in the pragmatic sense that a decision-maker will be likely to actually go in that direction because it's reasonable and falls within their timeframe.

Of­ten­times these two defi­ni­tions of good con­flict. For ex­am­ple, things you think will be good for the long-term fu­ture won’t [nec­es­sar­ily] be things that are tractable or rea­son­able from a de­ci­sion-maker’s point of view. I ac­knowl­edge that these two things are in ten­sion. It’s hard to figure out how to com­pro­mise be­tween them some­times.

[Slide 10]

That be­ing said, I think AI strat­egy and gov­er­nance re­search can aim to have good things to say about a given de­ci­sion-maker’s (1) pri­ori­ties, (2) strate­gies, and (3) tac­tics. Those are three broad buck­ets to dig into a bit more.

Pri­ori­ties: I think pri­ori­ties are ba­si­cally peo­ple’s goals. What benefits are they in­cen­tivized to pur­sue, and what costs are they will­ing to bear in the pro­cess of pur­su­ing those goals? For ex­am­ple, if you man­age to con­vince a lab that safety leads to product ex­cel­lence, that can make safety a goal for the lab. If you man­age to con­vince a gov­ern­ment that co­op­er­a­tion is nec­es­sary for tech­nol­ogy lead­er­ship in an in­ter­na­tional world, that can make co­op­er­a­tion a goal.

Strate­gies: You may aim to have use­ful things to say about cer­tain strate­gies that [de­ci­sion-mak­ers adopt]. For ex­am­ple, re­source al­lo­ca­tion is a pretty com­mon strat­egy that one could aim to in­fluence. How are they dis­tribut­ing their bud­gets? How are they in­vest­ing in re­search and de­vel­op­ment efforts across var­i­ous streams? You also may have things to say about what a given ac­tor chooses to pro­mote or ad­vo­cate for ver­sus [ig­nore]. For ex­am­ple, in the case of in­fluenc­ing a gov­ern­ment, you might want them to pur­sue cer­tain pieces of leg­is­la­tion that can help you achieve cer­tain goals. In the case of labs, you might want them to in­vest in cer­tain types of new pro­grams or differ­ent work­streams.

Tac­tics: The third area is tac­tics. Th­ese in­clude pub­lic re­la­tions tac­tics. What do they sig­nal to the ex­ter­nal world, and how does that af­fect their abil­ity to achieve their goals? And what about re­la­tion­ship tac­tics — with whom do they co­or­di­nate and co­op­er­ate? Whom do they trust (and dis­trust)? Whom do they de­cide to in­vest in?

To make this a lit­tle bit more con­crete, I’m go­ing to pick on an ac­tor who needs a lot of good [ad­vice]: the U.S. gov­ern­ment.


[Slide 11]

One of the biggest risks is that nation states will slide into techno-nationalist economic blocs. The framing of strategic competition that we have around AI now could exacerbate a number of AI risks. I won't go into detail now, but we've written a fair amount about it at the Center for the Governance of AI. We want to prevent nations from sliding into various economic blocs and the nationalization of bits of AI research and development.

What would a car­i­ca­ture of the U.S. gov­ern­ment’s po­si­tion look like? (I say “car­i­ca­ture” be­cause it’s not at all clear that they ac­tu­ally have a co­her­ent strat­egy.) It looks some­thing like slid­ing into these eco­nomic blocs. And that’s a bad thing. Their [over­ar­ch­ing] pri­ori­ties are tech­nol­ogy lead­er­ship, in both an eco­nomic and mil­i­tary sense, with a corol­lary of pre­serv­ing and main­tain­ing na­tional se­cu­rity. Costs that they may be will­ing to bear in ex­treme cir­cum­stances in­clude any­thing that is re­quired to gain con­trol of an R&D pipeline and se­cure it within na­tional bor­ders.

Now they are making moves in the strategy and tactics space. For example, the export controls that the U.S. government announced in November 2018 indicate that they want to preserve domestic capacity for R&D at the cost of investing in international efforts and transnational communities. They also indicate an explicit intention of shutting out foreign competitors and adversaries. [Overall], their AI strategies and tactics point in the direction of “America first.” And the footnotes there suggest that when America is first, over the long term the world suffers. That's too bad. So, those are the kinds of stances that the U.S. posture points toward.

If one has [the chance to try per­suad­ing] the U.S. gov­ern­ment, one could aim to con­vince them that their pri­ori­ties, strate­gies, and tac­tics should move in a differ­ent di­rec­tion. For ex­am­ple, a de­sir­able pri­or­ity could be tech­nol­ogy lead­er­ship, but lead­er­ship could mean lead­ing with a global, cos­mopoli­tan view­point. You [could fo­cus on in­fluenc­ing them to] bear the cost of in­vest­ing in things like safety re­search in or­der to pur­sue this pri­or­ity in a re­spon­si­ble way. The strate­gies and tac­tics you could in­form them of when they con­duct this re­search could [in­volve in­ter­na­tional out­reach]. With whom should they ally them­selves and co­op­er­ate? What kinds of sig­nals should they send ex­ter­nally to en­sure that oth­ers with a similar view of tech­nol­ogy lead­er­ship will [take steps in] the same di­rec­tion?

This is the type of de­ci­sion set that you want to in­fluence when con­duct­ing up­stream AI strat­egy and gov­er­nance re­search. [Once you] have a broad sense of what you think is good, you have the mega-task of try­ing to make those good things hap­pen in the real world.

I have a few sug­ges­tions for how to ap­proach that.

[Slide 12]

The first is to [focus on] a few tractable good things. I say “tractable” here to mean things that will make sense to, or sit well with, decision-makers, such that they are likely to do something about it.

One way to do that is to find hooks be­tween things that you care about and things that a de­ci­sion-maker cares about. Find that in­ter­sec­tion or mid­dle part of the Venn di­a­gram. One canon­i­cal bifur­ca­tion — which I don’t ac­tu­ally like all that much — is the bifur­ca­tion be­tween near-term and long-term con­cerns. Near-term con­cerns are things that are poli­ti­cally salient. [They al­low you to] have a dis­cus­sion in Congress and not look nuts. Long-term con­cerns are of­ten things that make you look a lit­tle bit wacky. But there are some things at the in­ter­sec­tion that could lead you to talk about near-term con­cerns in a way that lays the foun­da­tion for long-term con­cerns that you ac­tu­ally care about and want to seed dis­cus­sions around.

For ex­am­ple, the au­toma­tion of man­u­fac­tur­ing jobs is a huge dis­cus­sion in the U.S. at the mo­ment. It’s a micro­cosm of a much larger-scale prob­lem [in­volv­ing] mas­sive la­bor dis­place­ment, eco­nomic dis­rup­tions, and the dis­tri­bu­tion of eco­nomic power in ways that could be un­de­sir­able. That’s a set of long-term con­cerns. But talk­ing about it in the con­text of truck drivers in the U.S. could be an in­road into mak­ing those long-term con­cerns rele­vant.

A similar thing can be said about the U.S. and China. Peo­ple in Wash­ing­ton, D.C. care about the U.S.’s pos­ture to­ward China, and what the U.S. does and sig­nals now will be rele­vant to how this par­tic­u­lar bilat­eral re­la­tion­ship pans out in the fu­ture. And that’s in­cred­ibly rele­vant for how cer­tain race dy­nam­ics pan out.

Once you’ve filtered for these things that are tractable, then you need to do the work of trans­lat­ing them in a di­gestible way for de­ci­sion-mak­ers.

[Slide 13]

The as­sump­tion here is that de­ci­sion-mak­ers are of­ten very time-con­strained and at­ten­tion-con­strained. They will [be more likely to re­spond to mes­sages that are] easy to re­mem­ber and [re­layed] in the form of memes. And un­for­tu­nately, long, well-ar­gued, epistem­i­cally ro­bust pieces end up [hav­ing less im­pact] than we would hope.

Superintelligence is perhaps one of the best examples. This is an incredibly epistemically robust [book]. But ultimately, the meme it was boiled down to for the vast majority of people was: “Smart Oxford academics think AI is going to kill us.” So don't try to beat them with nuance. Try to just play this meme game and come up with better memes.

Here are three ex­am­ples of memes that are cur­rently in dan­ger of tak­ing off:

[Slide 14]

1. The U.S. and China are in an arms race.
2. Who­ever wins will have a de­ci­sive strate­gic ad­van­tage.
3. AI safety is always or­thog­o­nal to perfor­mance.

It’s not clear to me that all of these things are true. And for some of them I’m quite sure that I don’t want them to be true. But they are be­ing prop­a­gated in ways that are in­form­ing de­ci­sions that are cur­rently be­ing made. I think that’s a bad thing.

One thing to focus on, in terms of trying to have good things to say and making those good things heard, is to translate them into messages that are similarly digestible.

[Slide 15]

Candidates for memes we might propagate are things like: “the equivalent of leading in the AI space is to care about safety and governance”; “the windfall from transformative AI should be distributed according to some common principles of good”; and “governance doesn't equal government regulation, so multiple actors carry the responsibility to govern well.” Unless we propagate our messages in easy ways, it's going to be very hard to compete with the bad narratives out there.

[Slide 16]

The last step is to en­sure that [our mes­sages] reach some cir­cles of in­fluence. To do that, model your ac­tor well. For ex­am­ple, if you want to tar­get a spe­cific lab, try to figure out who the de­ci­sion-mak­ers are, what they care about, and whom they listen to. Then, tar­get your spe­cific mes­sages and work with those par­tic­u­lar cir­cles of in­fluence in or­der to get heard. That’s my hot take on how [re­search] can be made slightly more rele­vant in a real-world sense.

Some fi­nal points that I want you to take away:

[Slide 17]

Ul­ti­mately, the im­pact of this work is con­tin­gent on how good our tac­tics are. The claim that I’ve made to­day is that we need to put far more work into this. I’m un­cer­tain how well we can do that — and how much effort we should put into it. But broadly speak­ing, as soon as we have rele­vant in­sights, we should be in­ten­tional about in­vest­ing in prop­a­gat­ing them.

[Slide 18]

Se­cond, ex­er­cis­ing this in­fluence is go­ing to be a messy poli­ti­cal game. The world’s [ap­proach to] de­ci­sion-mak­ing is mud­dled, parochial, and sub­op­ti­mal. We can have a bit of a cry about how sub­op­ti­mal it is. But ul­ti­mately, we need to work within that sys­tem. [Us­ing] effec­tive nav­i­ga­tion strate­gies is go­ing to re­quire us to work within a set of poli­tics that we may dis­agree with to some ex­tent in terms of val­ues. But we need to be tac­ti­cal and do it.

[Slide 19]

Fi­nally, gov­er­nance is a very hard nav­i­ga­tion challenge. We have no track record of do­ing it well, so we should be hum­ble about our abil­ity to do it. At the mo­ment we don’t know that we can suc­ceed, but we can try our best.

Moder­a­tor: Thank you for that talk. I’d like to start with some­thing that you ended with. You said that we’re deal­ing with sys­tems that are difficult to op­er­ate in. To what ex­tent do you even think it’s pos­si­ble to get peo­ple to think more clearly? Should we in­stead just be fo­cus­ing on in­sti­tu­tional change?

Jade: I think there are things that we need to try out. In­sti­tu­tional change is valuable. At­tempt­ing to com­mu­ni­cate through ex­ist­ing de­ci­sion-mak­ers in ex­ist­ing in­sti­tu­tions is valuable. But I don’t think we know enough about what’s nec­es­sary and how tractable cer­tain things are in or­der to put all of our eggs in one bas­ket.

So maybe one meta-point is that as a field, we need to di­ver­sify our strate­gies. For ex­am­ple, I think some peo­ple should be fo­cus­ing on mod­el­ing ex­ist­ing de­ci­sion-mak­ers — par­tic­u­larly de­ci­sion-mak­ers that we think [have enough cred­i­bil­ity] to be rele­vant. And I think oth­ers could take the view that ex­ist­ing in­sti­tu­tions are in­suffi­cient, and that in­sti­tu­tional change is ul­ti­mately what is re­quired. And then that be­comes a par­tic­u­lar strat­egy that is pur­sued.

The field is shrouded in enough un­cer­tainty about what’s go­ing to be rele­vant and tractable that I would en­courage folks to di­ver­sify.

Moder­a­tor: You fo­cused on the U.S. gov­ern­ment as one of the ac­tors that peo­ple might pay par­tic­u­lar at­ten­tion to. Are there oth­ers that you would recom­mend peo­ple pay at­ten­tion to?

Jade: Yeah. I generally advocate for focusing on modeling governments [based in places that are likely to be relevant] more than particular private actors. For example, the Chinese government would be worth focusing on. I think we have a better shot at modeling them based on history. There's more variance, and there are more anomalies, in private spaces.

Se­cond, fo­cus on or­ga­ni­za­tions that are im­por­tant de­vel­op­ers of this tech­nol­ogy [AI]. The canon­i­cal ones are Deep­Mind and OpenAI. There are oth­ers worth fo­cus­ing on too.

Moder­a­tor: Some­one could con­strue your ad­vice as try­ing to un­der­stand what’s hap­pen­ing cur­rently in the policy land­scape and in a va­ri­ety of aca­demic dis­ci­plines that peo­ple spend their lives in, and then meld­ing all of those to­gether into a recom­men­da­tion for poli­cy­mak­ers. That can feel a lit­tle over­whelming as a piece of ad­vice. If some­one has to start some­where and hasn’t worked in this field be­fore, what would you say is the min­i­mum that they should be pay­ing at­ten­tion to?

Jade: Good ques­tion. If you’re not go­ing to try to do ev­ery­thing (which is good ad­vice), I think one can nar­row down the space of things to fo­cus on based on com­pet­i­tive ad­van­tage. So think through which are­nas of policy de­ci­sions you’re likely to be able to in­fluence the most. Then, fo­cus speci­fi­cally on the sub­set of ac­tors in that space.

Moder­a­tor: And as­sum­ing a per­son doesn’t have ex­per­tise in one area and is just try­ing to fill a vac­uum of un­der­stand­ing some­where in this AI strat­egy realm, what would you [recom­mend] some­body get some ex­per­tise in?

Jade: That's a hard question. There are a lot of resources out there that can help orient you to the research space. A good place to start would be our website. There's a research agenda, which has a lot of footnotes and references that are very useful. And then there's also a blog post by the safety research team at DeepMind; they've compiled a set of resources to help folks get started in this space.

If you’re par­tic­u­larly in­ter­ested in go­ing deeper, you’re always wel­come to email me.
