Problem areas beyond 80,000 Hours’ current priorities

Why we wrote this post

At 80,000 Hours we’ve generally focused on finding the most pressing issues and the best ways to address them.

But even if some issue is ‘the most pressing’—in the sense of being the highest impact thing for someone to work on if they could be equally successful at anything—it might easily not be the highest impact thing for many people to work on, because people have various talents, experience, and temperaments.

Moreover, the more people involved in a community, the more reason there is for them to spread out over different issues. There will eventually be diminishing returns as more people work on the same set of issues, and both the value of information and the value of capacity building from exploring more areas will be greater if more people are able to take advantage of that work.

We’re also pretty uncertain which problems are the highest impact things to work on—even for people who could work on anything equally successfully.

For example, maybe we should be focusing much more on preventing great power conflict than we have been. After all, the atom bomb was arguably the first technology to pose a plausible existential risk to humanity; it’s easy to imagine that wars could incubate other, even riskier technological advancements.

Or maybe there is some dark horse cause area—like research into surveillance—that will turn out to be way more important for improving the future than we thought.

Perhaps for these reasons, many of our advisors guess that it would be ideal if 5–20% of the effective altruism community’s resources were focused on issues that the community hasn’t historically been as involved in, such as the ones listed below. We think we’re currently well below this fraction, so it’s plausible some of these areas might be better for some people to go into right now than our top priority problem areas.

Who is best suited to work on these other issues? Pioneering a new problem area from an effective altruism perspective is challenging, and in some ways harder than working on a priority area, where there is better training and infrastructure. Working on a less-researched problem can require a lot of creativity and critical thinking about how you can best have a positive impact by working on the issue. For example, it likely means working out which career options within the area are the most promising for direct impact, career capital, and exploration value, and then pursuing them even if they differ from what most other people in the area tend to value or focus on. You might even eventually need to ‘create your own job’ if pre-existing positions in the area don’t match your priorities. The ideal person would therefore be self-motivated, creative, and willing to chart the waters for others, as well as having a strong interest in or relevant experience with one of these less-explored issues.

We compiled the following lists by combining suggestions from 6 of our advisors with our own ideas, judgement, and research. We were looking for issues that might be very important, especially for improving the long-term future, and which might be currently neglected by people thinking from an effective altruism perspective. If something was suggested twice, we took that as a presumption in favor of including it.

We’re very uncertain about the value of working on any one of these problems, but we think it’s likely that there are issues on these lists (and especially the first one) that are as pressing as our highest priority problem areas.

What are the pros and cons of working in each of these areas? Which are less tractable than they appear, or more important? Which are already being covered adequately by existing groups we don’t know enough about? What potentially pressing problems is this list missing?

We’d be excited to see people discussing these questions in the comments, and to check out relevant material from any readers who have existing expertise in these areas. We’ve linked to a few resources for each area that seem interesting or helpful, though we don’t always agree with everything they say and we wouldn’t be surprised if in many cases there are better resources out there.

We hope these lists can give people who want to work on issues other than those we talk most about some fruitful ideas to explore.

Potential highest priorities

The following are some global issues that seem like they might be especially pressing from the perspective of improving the long-term future. We think these have a chance of being as pressing for people to work on as our priority problems, but we haven’t investigated them enough to know.

Great power conflict

A large violent conflict between major powers such as the US, Russia, or China could be the most devastating event to occur in human history, and could result in billions of deaths. In addition, mistrust between major powers makes it harder for them to coordinate on arms control or ensure the safe use of new technologies.

Though there is considerable existing work in this area, peacebuilding measures aren’t always aimed at reducing the chance of the worst outcomes. We’d like to see more research into how to reduce the chance of the most dangerous conflicts breaking out and the damage they would cause, as well as implementation of the most effective mitigation strategies.

Great power conflict is the subject of a large body of literature spanning political science, international relations, military studies, and history. Get started with accessible materials on contemporary great power dynamics—this blog post for a brief and simple explanation, this report from Brookings on the changing role of the US on the world stage, this podcast series on current military and strategic dynamics from the International Institute for Strategic Studies, and this talk on the risks from great power conflict using the scale, solvability, and neglectedness framework.

Useful books in this area include After Tamerlane: The Rise and Fall of Global Empires, 1400–2000, and Destined for War: Can America and China Escape Thucydides’s Trap?

Global governance

International governing institutions might play a crucial role in our ability to navigate global challenges, so improving them has the potential to reduce risks of global catastrophes. Moreover, in the future we may see the creation of new global institutions that could be very long-lasting, especially if the international community trends toward more cohesive governing bodies—and getting these right could be very important.

The Biological Weapons Convention is an example of one way institutions like the UN can help coordinate states to reduce global risks—but it also demonstrates current weaknesses of this approach, like underfunding and weak enforcement mechanisms.

There doesn’t seem to be as much work on improving global governance as you might expect—especially with an eye toward reducing global catastrophic risks. Here are a few pieces we know of:

We’d be keen to see more research on what governance reforms might be best for improving the long-run future.

Governance of outer space

It seems possible that humanity will at some point settle outer space. If so, the sheer scale of the accessible universe makes what humanity does there enormously important.

Currently there is no agreement on how to decide what happens in space, should settlement become possible. The Outer Space Treaty of 1967 prohibits countries from claiming sovereignty over anything in space, but attempts to agree on more than that have failed to achieve consensus.

Who ends up in control of resources in space will naturally shift how they are used, and might influence vast numbers of lives. Furthermore, having agreements on how space is divided between groups might avoid a major conflict or a harmful rush to claim resources, and instead foster cooperation or compromise between different parties.

To make more concrete one possible way things could go wrong: one superpower may be alarmed by another superpower that finds itself on the verge of claiming and settling Mars, as it would anticipate eventually being eclipsed economically and militarily.

Despite the huge stakes, governance of space is an extremely niche area of study and advocacy. As a result, major progress could probably be made by a research community focused on this issue, even just by applying familiar lessons from related fields of law and social science.

Arguably it is premature to work on this problem because actual space settlement appears so far off. While this is an important point, we don’t think it is decisive, for 4 reasons.

First, legal arrangements like constitutions and international treaties are often ‘sticky’ because they are difficult to renegotiate. Second, it may be easier to agree on fair processes for splitting resources in space while settlement remains far in the future, as it will be harder for interest groups to foresee what peculiar rules would benefit them in particular. Third, humanity may experience another ‘industrial revolution’ in the next century driven by AI or atomic scale manufacturing, which would allow space settlement to begin sooner than seems likely today. Fourth, once settlement becomes possible there will likely be a rush to agree on how to manage the process, and the more preparation has been completed ahead of that moment the better the outcome is likely to be.

This blog post by Tobias Baumann fleshes out this case and suggests next steps people could take if they’re interested in using their career to study this problem.

Voting reform

We often elect our leaders with ‘first-past-the-post’-style voting, but this can easily lead to perverse outcomes. Better voting methods could lead to better institutional decision-making, better governance in general, and better international coordination.
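
To make the failure mode concrete, here is a minimal sketch in Python, with entirely made-up vote counts, of how ‘first-past-the-post’ can elect a candidate most voters don’t approve of, while a method like approval voting (one of the reforms promoted by the Center for Election Science linked below) rewards broad support:

```python
# Hypothetical election: 100 voters, three candidates.
# Each ballot records a first choice plus the set of candidates
# the voter finds acceptable.
ballots = (
    [("A", {"A"})] * 35          # A's base approves of A only
    + [("B", {"B", "C"})] * 33   # B's voters also find C acceptable
    + [("C", {"C"})] * 32        # C's base approves of C only
)

# First-past-the-post: only first choices count.
fptp = {}
for first, _ in ballots:
    fptp[first] = fptp.get(first, 0) + 1

# Approval voting: every approval on each ballot counts.
approval = {}
for _, approved in ballots:
    for candidate in approved:
        approval[candidate] = approval.get(candidate, 0) + 1

print(max(fptp, key=fptp.get))          # 'A' wins with 35% of first choices
print(max(approval, key=approval.get))  # 'C' wins, approved by 65% of voters
```

Here the similar candidates B and C split the first-choice vote, handing the first-past-the-post victory to A even though 65% of voters would have accepted C.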

Despite these potential benefits, ideas in this space often get little attention. One reason might be that current political leaders—those with the most power to institute reforms—have little incentive to change the systems that brought them to power. This might make the area particularly difficult to make progress in, though we still think additional effort here may be promising.

To learn more check out resources from the Center for Election Science and our podcast episode with Aaron Hamlin.

A related issue is the systematic lack of representation of future generations’ interests in policy making. One group trying to address this in the UK is the All Party Parliamentary Group for Future Generations.

Voting security is also important for preventing contested elections; we discuss it in our interview with Bruce Schneier.

Improving individual reasoning or cognition

The case here is similar to the case for improving institutional decision-making: better reasoning and cognitive capacities usually make for better outcomes, especially when problems are subtle or complex. And as with institutions, work on improving individual decision-making is likely to be helpful no matter what challenges the future throws up.

Strategies for improving reasoning might include producing tools, training, or research into how to make better forecasts or decisions, or come to sensible views on complex topics. Strategies for improving cognition might take a variety of forms, e.g., researching safe and beneficial nootropics.
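
As a concrete example of the kind of tool this research produces, here is a minimal sketch in Python of the Brier score, a standard metric from the forecasting literature that rewards well-calibrated probability estimates; the forecasts and outcomes below are invented for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Five yes/no questions: 1 means the event happened, 0 means it didn't.
outcomes = [1, 0, 1, 1, 0]

overconfident = [0.95, 0.90, 0.95, 0.10, 0.05]  # extreme probabilities, two bad misses
calibrated = [0.70, 0.30, 0.70, 0.60, 0.40]     # hedged but directionally right

print(brier_score(overconfident, outcomes))  # 0.3255
print(brier_score(calibrated, outcomes))     # 0.118 -- the cautious forecaster scores better
```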

Although focusing on individuals seems to us like it will usually be less effective for tackling global problems than taking a more institutional approach, it may be more promising if interventions can influence large segments of the population or be targeted toward the most influential people. See the Update Project for an example of the latter kind of strategy.

Global public goods

Many of the biggest challenges we face have the character of global ‘public goods’ problems—meaning everyone is worse off because no particular actors are properly incentivized to tackle the problem, and they instead prefer to ‘free-ride’ on the efforts of others.

If we could make society better at providing public goods in general, we might be able to make progress on many challenges at once. One idea we’ve discussed that both has promise and faces many challenges is quadratic funding, but the space for possible interventions here seems enormous.
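
To make the mechanism concrete, here is a minimal sketch in Python of the core quadratic funding formula (from the ‘liberal radicalism’ proposal of Buterin, Hitzig, and Weyl): a project’s total funding is the square of the sum of the square roots of its individual contributions, with a matching pool paying the difference. The dollar amounts are made up for illustration:

```python
import math

def quadratic_funding_total(contributions):
    """Total funding = (sum of square roots of individual contributions) ** 2."""
    return sum(math.sqrt(c) for c in contributions) ** 2

broad = [1.0] * 100   # 100 donors giving $1 each -> $100 raised directly
narrow = [100.0]      # one donor giving $100     -> $100 raised directly

print(quadratic_funding_total(broad))   # 10000.0: a $9,900 match from the subsidy pool
print(quadratic_funding_total(narrow))  # 100.0: no match at all
```

The match grows with the number of distinct supporters rather than with the amount given, which is what targets the subsidy at widely valued public goods, and also why the mechanism is vulnerable to fake identities and collusion, among the challenges alluded to above.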

Another potential approach here is improving political processes. Governments have enormous power and are the bodies we most often rely on to tackle public goods problems. Shifting how this power is used even a little can have substantial and potentially long-lasting effects. Check out our podcast episode with Glen Weyl to learn about current and fairly radical ideas in this space.

If you’re interested in tackling these issues, learning product design, gaining experience in advocacy or politics, or studying economics may all be useful first steps.

Surveillance

We’d be keen to see more research into balancing the risks and benefits of surveillance by states and other actors, especially as technological progress makes surveillance on a mass scale easy and affordable.

Some have argued that sophisticated surveillance techniques might be necessary to protect civilization from risks posed by advancing technology with destructive capabilities (for example see Nick Bostrom’s article ‘The Vulnerable World Hypothesis’); at the same time, many warn of the dangers widespread surveillance poses not only to privacy but to valuable forms of political freedom (example).

Because of these conflicts, it may be especially useful to develop ways of making surveillance more compatible with privacy and public oversight.

Atomic scale manufacturing

Both the risks and benefits of advances in this technology seem like they might be significant, and there is currently little effort to shape its trajectory. However, there is also relatively little investment going into making atomic-scale manufacturing work right now, which reduces the urgency of the issue.

To learn more, read this popular article by Eric Drexler, a cause report from the Open Philanthropy Project, or listen to our podcast episode with Christine Peterson.

Broadly promoting positive values

If positive values like altruism and concern for other sentient beings were more widespread, then society might be able to better deal with a wide range of other problems—including problems that haven’t come up yet but might in the future, such as how to treat conscious machine intelligences. Moreover, there could be ways that the values held by society today or in the near future get ‘locked in’ for a long time, for example in constitutions, making it important that positive values are widespread before such a point.

We’re unsure about the range of things an impactful career aimed at promoting positive values could involve, but one strategy would be to pursue a position that gives you a platform for advocacy (e.g., journalist, blogger, podcaster, academic, or public intellectual) and then use that position to speak and write about these ideas.

Advocacy could be built around ideas such as animal welfare, moral philosophy (including utilitarianism or the ‘golden rule’), concern for foreigners, or other themes.

In the context of cause prioritization within the effective altruism community, some have argued for the importance of spreading positive values through working to improve the welfare of farmed animals (comparing it to AI safety research), while others push back against this view.

Civilization resilience

We might be able to significantly increase the chance that, if a catastrophe does happen, civilization survives or gets rebuilt. However, measures in this space receive very little attention today.

To learn more, see our podcast episode on the development of alternative food sources, this paper on refuges, and our podcast episode with Paul Christiano.

S-risks

An ‘s-risk’ is a risk of an outcome much worse than extinction. Research working out how to mitigate these risks is a subset of global priorities research that might be particularly neglected and important. Read more.

Whole brain emulation

This is a strategy for creating artificial intelligence by replicating the functionality of the brain in software. If successful, whole brain emulation could enable dramatic new forms of intelligence—in which case steering the development of this technique could be crucial. Read a tentative outline of the risks associated with whole brain emulation.

Risks of stable totalitarianism

Bryan Caplan has written about the worry that ‘stable totalitarianism’ could arise in the future, especially if we move toward a more unified world government (perhaps in order to solve other global problems) or if certain technologies—like radical life extension or better surveillance technologies—make it possible for totalitarian leaders to rule for longer.

We think more research in this area would be valuable. For instance, we’d be excited to see further analysis and testing of Caplan’s argument, as well as people working on how to limit the potential risks from these technologies and political changes if they do come about.

Risks from malevolent actors

A blog post by David Althaus and Tobias Baumann argues that when people with some or all of the so-called ‘dark tetrad’ traits—narcissism, psychopathy, Machiavellianism, and sadism—are in positions of power or influence, this plausibly increases the risk of catastrophes that could influence the long-term future.

Developing better measures of these traits, they suggest—as well as good tests of these measures—could help us make our institutions less liable to be influenced by such actors. We could, for instance, make ‘non-malevolence’ a condition of holding political office or having sway over powerful new technologies.

While it’s not clear how large a problem malevolent individuals in society are compared to other issues, there is historical precedent for malevolent actors coming to power—Hitler, Stalin, and Mao plausibly had strong dark tetrad traits—and perhaps this wouldn’t have happened if there had been better precautions in place. If so, this suggests that careful measures could prevent future bad events of a similar scale (or worse) from taking place.

Safeguarding liberal democracy

Liberal democracies seem more conducive to intellectual progress and economic growth than other forms of governance that have been tried so far, and perhaps also to peace and cooperation (at least with other democracies). Political developments that threaten to shift liberal democracies toward authoritarianism may therefore be risk factors for a variety of disasters (like great power conflicts), as well as for society generally going in a more negative direction.

A great deal of effort—from political scientists, policymakers and politicians, historians, and others—already goes into understanding this situation and protecting and promoting liberal democracies, and we’re not sure how to improve upon this.

However, there are likely to be some promising interventions in this area that are currently relatively neglected, such as voting reform (discussed above) or improving election security in order to increase the efficacy and stability of democratic processes. A variety of other work, like good journalism or broadly promoting positive values, also likely indirectly contributes to this area.

Recommender systems at top tech firms

The technology involved in recommender systems—such as those used by Facebook or Google—may turn out to be important for positively shaping progress in AI safety, as argued here.

Improving recommender systems may also help provide people with more accurate information and potentially improve the quality of political discourse.

We may need to invest more to tackle future problems

It may be that the best opportunities for doing good from a longtermist perspective lie far in the future—especially if resources can be successfully invested now to yield greater leverage later. However, right now we have no way of effectively and securely investing resources long-term.

In particular, there are few if any financial vehicles that can be reasonably expected to persist for more than 100 years while also earning good investment returns and remaining secure. We’re unsure in general how much people should be investing vs. spending now on the most pressing causes. But it seems at least worthwhile to look more into how such philanthropic vehicles might be set up.
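
To illustrate why such vehicles could matter, here is a minimal sketch of the compounding arithmetic; the $1 million principal and the 5% real rate of return are illustrative assumptions, not claims about what is actually achievable:

```python
def future_value(principal, annual_real_return, years):
    """Value of a lump sum compounding at a fixed real rate of return."""
    return principal * (1 + annual_real_return) ** years

# Illustrative assumption: $1 million compounding at a 5% real return.
print(round(future_value(1_000_000, 0.05, 100)))  # ~131,501,258 -- about $131.5m after a century
```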

Founders Pledge—an organisation that encourages effective giving for entrepreneurs—is currently exploring this idea and is actively seeking input.

Learn more about this topic by listening to our podcast episode with Philip Trammell.

Other longtermist issues

We’re also interested in the following issues, but at this point think that work on them is likely somewhat less effective for substantially improving the long-term future than work on the issues listed above.

Economic growth

Speeding up economic growth doesn’t seem as useful as more targeted ways to improve the future, and in general we favor differential development. However, speeding up growth might still have large benefits, both for improving long-term welfare, and perhaps also for reducing existential risks. For debate on the long-term value of economic growth, check out our podcast episode with Tyler Cowen.

The causes of growth already see considerable research within economics, though this area is still more neglected than many topics. Potential strategies for increasing growth include trade reform (which also has the potential to reduce conflict), land use reform, and increasing aid spending and effectiveness.

Science policy and infrastructure

Scientific research has been an enormous driver of human welfare. However, science policy and infrastructure are not always well-designed to incentivize research that most benefits society in the long term.

For example, we’ve argued that some scientific and technological developments can increase risks of catastrophe, which better institutional checks might be able to help reduce.

More prosaically, scientific progress is often driven more by what is commercially valuable, interesting, or prestigious than by considerations of long-run positive impact. In general, we favor differential development in science and technology over indiscriminate progress, which better science policies or institutional design may help enable.

This suggests that there is room to improve the systems shaping scientific research and increase their benefits going forward. We’re particularly keen on people creating structures or incentives to push scientific research in more positive and less risky directions. Read more.

Migration restrictions

Reducing migration restrictions has the potential to greatly increase economic growth, intercultural understanding, and cosmopolitanism—as well as help migrants directly. However, this strategy also faces strong opposition and so carries political risk.

Read more from the Open Philanthropy Project, OpenBorders.info, or see the book Open Borders: The Science and Ethics of Immigration.

Aging

Recent advances in the science of aging have made it seem more feasible than was previously thought to radically slow the aging process and perhaps allow people to live much longer. If these efforts are successful, some have argued there would be positive long-run effects on society, as people would be led to think in more long-term ways and could keep working productively past retirement age, which could be beneficial for intellectual and economic growth.

That said, the case for long-term impact here is highly speculative and many people think more anti-aging research could be totally ineffective (or perhaps even negative). Anti-aging research also might soon be able to draw substantial private investment, meaning it will be less neglected. But some have also argued that’s a reason to work on it now, because it may need some early successes before it can become a self-sustaining field. Read more.

Improving institutions to promote development

Institutional quality seems to play a large role in development, so if there were a way to make improvements to institutions in developing countries, this could be an effective way to improve many people’s lives.

For instance, legal and political changes in China seem to have been key to its economic development from the 1980s onwards. For a discussion of the importance of governing institutions for economic growth see our interview with a group trying to found cities with improved legal infrastructure in the developing world.

Keep in mind, however, that these efforts are often best pursued by citizens of the relevant countries. There is also substantial disagreement about which institutions are best, and the answers will vary depending on a country’s circumstances and culture.

Space settlement and terraforming

Expanding to other planets could end up being one of the most consequential things humanity ever does. It could greatly increase the number of beings in the universe and might reduce the chance that we go extinct by allowing humans to survive deadly catastrophes on Earth. It may also have dramatic negative consequences, for instance if we fail to take into account the welfare of beings we cause to exist in the process, or if settlement turns out to increase the risk of eventual catastrophic conflict. (Read more.)

However, independent space colonies are likely centuries away, and there are more urgent challenges in the meantime. As a result, we think that right now resources are generally better used elsewhere. Still, there does seem to be a chance that in the long run research on the question of whether space settlement is likely to be good or bad—and how good or bad—could have significant impacts.

Lie detection technology

Lie detection technology may soon see large improvements due to advances in machine learning or brain imaging. If so, this might have significant and hard-to-predict effects on many areas of society, from criminal justice to international diplomacy.

Better lie detection technology could improve cooperation and trust between groups by allowing people to prove they are being honest in high-stakes scenarios. On the other hand, it might increase the stability of non-democratic regimes by helping them avoid hiring, or enabling them to remove, anyone who isn’t a ‘true believer’ in their ideology.

Wild animal welfare

Wild animals are very numerous, and they often suffer due to starvation, heat, parasitism, and other issues. Almost nobody is working to figure out what, if anything, can be done to help them, or even which animals are likely to be suffering most. Research on invertebrates might be especially important, as there is such an enormous number of them.

Learn more in our interview with Persis Eskander and read some early research from the Foundational Research Institute here.

Other global issues

We think the following issues are quite important from a short- or medium-term perspective, and that work on them might well be as impactful as additional work focused on reducing the suffering of animals from factory farming or improving global health.

Mental health

Improving mental health seems like one of the most direct ways of making people better off, and there appear to be many promising areas for research and reform that have not yet been adequately explored—especially with regard to new drug therapies and improving mental health in the developing world. See the Happier Lives Institute for more.

There is also some chance that, like economic growth, better mental health in a population could have positive indirect effects that accumulate over time. Read a preliminary review of this cause area and check out our podcast episode with Spencer Greenberg to learn more.

Biomedical research and other basic science

Basic scientific research in general has had a large positive effect on welfare historically. Major breakthroughs in biomedical research specifically could lead to people living much longer, healthier lives. You might also be able to use training in biomedical research to work on other promising areas discussed above, like biosecurity or anti-aging research. Read more.

Increasing access to pain relief in developing countries

Most people lack access to adequate pain relief, which leads to widespread suffering due to injuries, chronic health conditions, and disease. One natural approach is increasing access to cheap pain relief medications that are common in developed countries, but often not available in the developing world. One group working in this area is the Organization for the Prevention of Intense Suffering. Read more.

Other risks from climate change

We discuss extreme risks of climate change—such as severe warming and geopolitical risks—in our writeup of the area.

Climate change also threatens to create many smaller problems or make other global problems worse, for example frictions between countries due to movement of refugees. While climate change is not as neglected as other areas we cover, we are highly supportive of reducing carbon emissions through research, better technology, and policy interventions. Read more.

Smoking in the developing world

Smoking takes an enormous toll on human health—accounting for about 6% of all ill health globally, according to the best estimates. This is more than HIV and malaria combined. Despite this, smoking is on the rise in many developing countries as people become richer and can afford to buy cigarettes.

Possible approaches include advocating for cigarette taxes, campaigns to discourage smoking, and the development of e-cigarette technology. Read more.

There is a lot to do.