I am an experienced interdisciplinary researcher and have focused on using computational methods to derive insights into biological systems. My academic research took me from collecting insects in tropical rainforests to imaging them in synchrotrons. I have become progressively more involved with the Effective Altruism community over several years and I am now aiming to apply my expertise in areas that more directly benefit society. To that end, I have recently redirected my research towards exploring novel technical countermeasures against viral pandemics.
I’d be interested in knowing if other senses (sound, especially) are processed faster at the same time. It could be that for a reaching movement, our attention is focused primarily visually, and we only process vision faster.
I agree that this would be an interesting experiment. If selective attention is involved then I think it is also possible that other senses would be processed slower. Unfortunately, my impression is that comparatively limited work has been done on multi-sensory processing in human psychology.
Articles like this make me think there is some basis to this concern:
Coronavirus: Russia calls international concern over vaccine ‘groundless’
On Wednesday, Germany’s health minister expressed concern that it had not been properly tested.
“It can be dangerous to start vaccinating millions… of people too early because it could pretty much kill the acceptance of vaccination if it goes wrong,” Jens Spahn told local media.
“Based on everything we know… this has not been sufficiently tested,” he added. “It’s not about being first somehow—it’s about having a safe vaccine.”
This seems like a thorough consideration of the interaction of BCIs with the risk of totalitarianism. I was also prompted to think a bit about BCIs as a GCR risk factor recently and had started compiling some references, but I haven’t yet refined my views as much as this.
One comment I have is that the risk described here seems to rely not just on the development of any type of BCI but on a specific kind: relatively cheap consumer BCIs that can nonetheless provide a high-fidelity bidirectional neural interface. It seems likely that this type of BCI would need to be invasive, but it’s not obvious to me that invasive BCI technology will inevitably progress in that direction. Musk hints that Neuralink’s goals are mass-market, but I expect that regulatory efforts could limit invasive BCI technology to medical use cases, and likewise, any military development of invasive BCIs seems likely to lead to equipment that is too expensive for mass adoption (although it could provide the starting point for commercialization). DARPA’s Next-Generation Nonsurgical Neurotechnology (N3) program does have the goal of developing high-fidelity non- or minimally-invasive BCIs; my intuition is that they will not achieve their goal of reading from one million and writing to 100,000 neurons non-invasively, but I’m not sure about the potential of the minimally-invasive path. So one theoretical consideration is what percentage of a population needs to be thought-policed to retain effective authoritarian control, which would then indicate how commercialized BCI technology would need to be before it could become a risk factor.
In my view, a reasonable way to steer BCI development away from posing a risk factor for totalitarianism would be to encourage the development of high-fidelity, non-invasive, and read-focused consumer BCIs. While non-invasive devices are intrinsically more limited than invasive ones, if consumers can still be satisfied by their performance then the demand to develop invasive technology will be reduced. Facebook and Kernel already look like they are moving towards non-invasive technology. One company that I think is generally overlooked is CTRL-Labs (now owned by Facebook), who are developing an armband that acquires high-fidelity measurements from motor neurons—although this is a peripheral nervous system recording, users can apparently repurpose motor neurons for different tasks and even learn to control the activity of individual neurons (see this promotional video). As an aside, if anybody is interested in working on non-invasive BCI hardware, I have a project proposal for developing a device that acquires high-fidelity, non-invasive measurements of central nervous system activity; I’m no longer planning to pursue it but am able to share it.
The idea of BCIs that punish dissenting thoughts being used to condition people away from even thinking about dissent may have a potential loophole: such conditioning could lead people to avoid thinking those thoughts, or it could simply lead them to think them in ways that aren’t punished. I expect that human brains have sufficient plasticity to accomplish this under some circumstances, and while the punishment controller could also adapt what it punishes to try to catch such evasive thoughts, it may not always have the advantage, so I don’t think BCI thought policing could be assumed to be 100% effective. More broadly, differences in both intra- and inter-person thought patterns could determine how effective BCIs are for thought policing. If a BCI monitoring algorithm can be developed on a small pool of subjects and then applied en masse, that seems much riskier than if the monitoring algorithm needs to be adapted to each individual and possibly updated over time (though there would be scope for automating the updates). I expect that Neuralink’s future work will indicate how ‘portable’ neural decoding and encoding algorithms are between individuals.
I have a fun anecdotal example of neural activity diversity: when I was doing my PhD at the Queensland Brain Institute, I was a pilot participant in a colleague’s fMRI study on visual navigation. Afterwards, he said that my neural responses were quite different from those of the other pilot participant (we both did the navigation task well). He completed and published the study and asked the other pilot participant to join other fMRI experiments he ran, but never asked me to participate again. I’ve wondered if I was the one who ended up having the weird neural responses compared to the rest of the participants in that study… (although my structural MRI scans are normal, so it’s not like I have a completely wacky brain!)
The BCI risk scenario I’ve considered is whether BCIs could provide a disruptive improvement in a user’s computer-interface speed or another cognitive domain. DARPA’s Neurotechnology for Intelligence Analysts (NIA) program showed a 10x increase in image analysis speed with no loss of accuracy, using just EEG (see here for a good summary of DARPA’s BCI programs up to 2015). It seems reasonable that somewhat larger speed improvements could be attained using invasive BCIs, and this speed improvement would probably generalize to other, more complicated tasks. While advanced BCIs are limited to early adopters, could such cognitive advantages facilitate risky development of AI or bioweapons by small teams, or give operational advantages to intelligence agencies or militaries? (Happy to discuss or share my notes on this with anybody who is interested in looking into this aspect further.)
Working together to examine why BAME populations in developed countries are severely affected by COVID-19
The call for science to be done in service to society reminds me of Nicholas Maxwell’s call to redirect academia to work towards wisdom rather than knowledge (see here and also here). I haven’t read any of Maxwell’s books on this, but it surprises me that there doesn’t seem to be any interaction between him and EA philosophers at other UK institutes, as Maxwell’s research seems to be generally EA-aligned (although limited to the broad meta-level).
Although not really a field, Nassim Taleb’s book Antifragile springs to mind—I haven’t read it myself but have seen it referenced in several discussions on economic fragility, so it might at least be a starting point to work with.
We are seeking additional recommendations for charities that operate in Latin America and the Arabian Peninsula, particularly in the areas of direct aid (cash transfers) and strengthening health systems.
Doe direto was running a trial giving cash transfers to vulnerable families in Brazil. They seem to have finished the trial now, and I’m not sure if/when they will consider restarting it.
Thanks Michael, I had seen that but hadn’t looked at the links. Some comments:
The cause report from OPP makes the distinction between molecular nanotechnology and atomically precise manufacturing. The 2008 survey seemed to be explicitly considering weaponised molecular nanotechnology as an extinction risk (I assume the nanotechnology accident was referring to molecular nanotechnology as well). While there seems to be agreement that molecular nanotechnology could be a direct path to GCR/extinction, OPP presents atomically precise manufacturing as being more of an indirect risk, such as through facilitating weapons proliferation. The Grey goo section of the report does resolve my question about why the community isn’t talking about (molecular) nanotechnology as an existential risk as much now (the footnotes are worth reading for more details):
‘Grey goo’ is a proposed scenario in which tiny self-replicating machines outcompete organic life and rapidly consume the earth’s resources in order to make more copies of themselves.40 According to Dr. Drexler, a grey goo scenario could not happen by accident; it would require deliberate design.41 Both Drexler and Phoenix have argued that such runaway replicators are, in principle, a physical possibility, and Phoenix has even argued that it’s likely that someone will eventually try to make grey goo. However, they believe that other risks from APM are (i) more likely, and (ii) very likely to be relevant before risks from grey goo, and are therefore more worthy of attention.42 Similarly, Prof. Jones and Dr. Marblestone have argued that a ‘grey goo’ catastrophe is a distant, and perhaps unlikely, possibility.43
OPP’s discussion on why molecular nanotechnology (and cryonics) failed to develop as scientific fields is also interesting:
First, early advocates of cryonics and MNT focused on writings and media aimed at a broad popular audience, before they did much technical, scientific work …
Second, early advocates of cryonics and MNT spoke and wrote in a way that was critical and dismissive toward the most relevant mainstream scientific fields …
Third, and perhaps largely as a result of these first two issues, these “neighboring” established scientific communities (of cryobiologists and chemists) engaged in substantial “boundary work” to keep advocates of cryonics and MNT excluded …
At least in the case of molecular nanotechnology, the field’s simple failure to develop may have been lucky (at least from a GCR-reduction perspective), as it seems that the research that was (at the time) most likely to lead to the risky outcomes was simply never pursued.
Something that I think EAs may be undervaluing is scientific research done with the specific aim of identifying new technologies for mitigating global catastrophic or existential risks, particularly where these have interdisciplinary origins.
A good example of this is geoengineering (the merger of climate/environmental science and engineering), which has developed strategies that could mitigate the effects of worst-case climate change scenarios. In contrast, the research being undertaken to mitigate worst-case pandemics seems to focus on developing biomedical interventions (biomedicine started as an interdisciplinary field, although it is now very well established as its own discipline). As an interdisciplinary scientist, I think there is likely to be further scope for identifying promising interventions in the existing literature, conducting initial analysis and modelling to demonstrate that these could be feasible responses to GCRs, and then engaging in field-building activities to encourage further scientific research along those paths. The reason I suggest focusing on interdisciplinary areas is that merging two fields often results in unexpected breakthroughs (even to researchers from the two disciplines involved in the merger) and many ‘low-hanging’ discoveries that can be investigated relatively easily. However, such a workflow seems uncommon both in academia (which doesn’t strongly incentivise interdisciplinary work or explicitly considering applications during early-stage research) and in EA (which [with the exception of AI Safety] seems to focus on finding and promoting promising research after it has already been initiated by mainstream researchers).
Still, this isn’t really a career option so much as a strategy for doing leveraged research, which seems like it would be better done at an impact-focused organisation than at a university. I’m personally planning to use this strategy and will attempt to identify and then model the feasibility of possible antiviral interventions at the intersection of physics and virology (although I haven’t yet thought much about how to effectively promote any promising results).
It could also be the case that the impact distribution of orgs is not flat, but that we’ve only discovered a subset of the high-impact ones so far (speculatively, some of the highest-impact orgs may not even exist yet). So if the distribution of applicants is flatter, then they are still likely to satisfy the needs of the known high-impact orgs, and others might end up finding or founding orgs that we later recognise to be high impact.
Sure, I agree that unvetted UBI for all EAs probably would not be a good use of resources. But I also think there are cases where a UBI-like scheme that funded people to do self-directed work on high-risk projects could be a good alternative to providing grants to fund projects, particularly at the early stage.
Asking people who specialise in working on early-stage and risky projects to take care of themselves with runway may be a bit unreasonable. Even if a truly risky project (in the low-probability-of-a-high-return sense) is well executed, we should still expect it to have an a priori success rate of 1 in 10 or lower. Assuming that it takes six months or so to test the feasibility of a project, people would need to save several years’ worth of runway if they wanted to be financially comfortable while continuing to pursue projects until one worked out (of course, lots of failed projects may be an indication that they’re not executing well, but let’s be charitable and assume they are); a rough sketch of this arithmetic follows the quote below. This would probably limit serious self-supported EA entrepreneurship to an activity one takes on at a mid-career or later stage (also noted by OPP in relation to charity founding):
Starting a new company is generally associated with high (financial) risk and high potential reward. But without a solid source of funding, starting a nonprofit means taking high financial risk without high potential reward. Furthermore, some nonprofits (like some for-profits) are best suited to be started by people relatively late in their careers; the difference is that late-career people in the for-profit sector seem more likely to have built up significant savings that they can use as a cushion. This is another reason that funder interest can be the key factor in what nonprofits get started.
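To make the ‘several years’ figure a bit more concrete, here is a rough back-of-the-envelope sketch. The success probability and time-per-project are just the illustrative assumptions from my comment above, not estimates from data:

```python
# Rough runway arithmetic (illustrative assumptions only).
p_success = 0.10      # assumed a priori chance that any given project works out
months_per_try = 6    # assumed time to test the feasibility of one project

# Expected number of attempts until the first success (geometric distribution)
expected_attempts = 1 / p_success
expected_runway_years = expected_attempts * months_per_try / 12
print(f"Expected runway until first success: ~{expected_runway_years:.1f} years")

# Chance of still having had no success after a given number of years of attempts
for years in (1, 2, 3, 5):
    attempts = years * 12 // months_per_try
    p_no_success = (1 - p_success) ** attempts
    print(f"After {years} year(s) ({attempts} attempts): "
          f"{p_no_success:.0%} chance of no success yet")
```

Under these assumptions the expected runway to a first success is about five years, and even after three years of full-time attempts there is still roughly a 50% chance of having nothing to show for it.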
At the moment I think there aren’t obvious mechanisms to support independent early-stage and high-risk projects at the point where they aren’t well defined and, more generally, to support independent projects that aren’t intended to lead to careers.
As an example that addresses both points, one of the highest-impact things I’m considering working on currently is a research project that could either fail in ~3 months or, if successful, occupy several years of work to develop into a viable intervention (with several more failure points along the way).
With regards to point 1: At the moment, my only option seems to be applying for seed funding, doing some work, and, if that is successful, applying to another funder for longer-term project funding (probably on several occasions). Each funding application is both uncertain and time-consuming, and knowing this somewhat disincentivises me from even starting (although I have recently applied for seed-stage funding). Having a funding format that started at project inception and could be renewed several times would be really helpful. I don’t think something like this currently exists for EA projects.
With regards to point 2: As a researcher, I would view my involvement with the project as winding down if/when it leads to a viable intervention—while I could stay involved as a technical advisor, I doubt I’d contribute much after the technology is demonstrated, nor do I imagine particularly wanting to be involved in later-stage activities such as manufacturing and distribution. This essentially means that the highest-impact thing I can think of working on would probably need my involvement for, at most, a decade. If it did work out then I’d at least have some credibility to get support for doing research in another area, but taking a gamble on starting something that won’t even need your involvement after a few years hardly seems like sound career advice to give (although from the inside view, it is quite tempting to ignore that argument against doing the project).
I think that lack of support in these areas is most relevant to independent researchers or small research teams—researchers at larger organisations probably have more institutional support when developing or moving between projects, while applied work, such as distributing an intervention, should be somewhat easier to plan out.
I haven’t seen the talk yet, but tend to agree that industrial ideas and technology were probably exported very quickly after their development in Europe (and later the US), which probably displaced any later and independent industrial revolution.
I think it’s also worth noting that the industrial revolution occurred after several centuries of European colonial expansion, during which material wealth was being sent back to Europe. For example, in the 300 years before the industrial revolution, American colonies accounted for >80% of the world’s silver production. So considering the Industrial Revolution to simply have been a European phenomenon could substantially understate the more global scope of the material contribution that may have facilitated it. However, it’s hard to know whether colonial wealth was required to create the right conditions for an industrial revolution or simply helped to speed it up. (Interestingly, China was mounting successful voyages of discovery in the early 15th century but had apparently abandoned its navy by the middle of that century. If China had instead gone on to start colonial activities around the same time as Europe, maybe Eastern industry would have started developing before the Western industrial tradition was imported.)
Guns, Germs, and Steel—I felt this provided a good perspective on the ultimate factors leading up to agriculture and industry.
In Guns, Germs, and Steel, Diamond comments briefly on technological stagnation and regression in small human populations (mostly in relation to Aboriginal Australians). I don’t know if there is much theoretical basis for this, but he suggests that the population size required to support even quite basic agricultural technology is likely much larger than the minimum genetically viable population.
So even if knowledge isn’t explicitly destroyed in a catastrophe, if humanity is reduced to small groups of subsistence farmers then it seems probable that the technological level they can utilize will be much lower than that of the preceding society (although probably higher than the same population level without a preceding society). The lifetime of unmaintained knowledge is also a limiting factor—books and digital media may degrade before the new civilisation is ready to make use of them (unless it plans ahead to maintain them). But I agree that this is all very speculative.
I think this needs clarifying: the probability of getting industry conditional on already having agriculture may be more likely than the probability of getting agriculture in the first place, but as agriculture seems to be necessary for industry, the total likelihood of getting industry is almost certainly lower than that of getting agriculture (i.e. most of the difficulty in developing an industrial society may be in developing that preceding agricultural society).
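To spell out the step I’m relying on (assuming agriculture is a strict prerequisite for industry, so there is no path to industry without it):

$$P(\text{industry}) = P(\text{industry} \mid \text{agriculture}) \cdot P(\text{agriculture}) \le P(\text{agriculture})$$

So no matter how high the conditional probability is, the total probability of reaching industry can’t exceed the probability of reaching agriculture.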
Would policies to manage orbital space debris be a good candidate for short-term work in this area, particularly if they can be directed at preventing tail-risk scenarios such as runaway collision cascades (Kurzgesagt has a cute video on this)? Although larger pieces of space debris are tracked and there are some current efforts to test debris-removal methods, it seems like this could suffer from a free-rider problem in the same way international climate change policy does (i.e. a lot of countries are scaling up their space programs, but most may rely on the US to take the lead on debris management).
In the event of a collision cascade, it also seems like it could create a weak form of the future trajectory lock-in scenario that Ord describes in The Precipice, in that humanity would be ‘locked out’ of spacefaring (and satellite usage) for as long as it took to clean up the junk or until enough of it naturally fell out of orbit (possibly centuries).
The ‘Here’ link just points to this post; I think you meant to link somewhere else?
Nice article Jason. I should start by saying that as a (mostly former) visual neuroscientist, I think that you’ve done quite a good job summarizing the science available in this series of posts, but particularly in these last two posts about time. I have a few comments that I’d like to add.
Before artificial light sources, there weren’t a lot of blinking lights in nature. So although visual processing speed is often measured as CFF, most animals didn’t really evolve to see flickering lights. I recall that my PhD supervisor Srinivasan did a study where he tried to behaviorally test honeybee CFF—he had a very hard time training them to go to flickering lights (study 1), but had much more success training them to go to spinning disks (study 2). Notably, the CFF of honeybees is generally accepted to be around 200 Hz, off the charts! That said, in an innate preference study on honeybees that I was peripherally involved with, we found honeybees had preferences for different frequencies of flickering stimuli, so they certainly can perceive and act on this type of visual information (study 3).
Even though CFF has been quite widely measured, if you wanted to do a comprehensive review of visual processing speed in different taxa then it would also be worth looking at other measures, such as visual integration time. This is often measured electrophysiologically (perhaps more commonly than CFF), and I expect that integration time will be tightly correlated with CFF; as they are causally related, one can probably be approximately calculated from the other (I say approximately because neural nonlinearities may add some variance; in the case of a video system it can be done exactly). For instance, this study on sweat bees carefully characterized their visual integration time at different times of day and in different light conditions but doesn’t mention CFF.
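As a toy illustration of that approximate conversion, here is a minimal sketch assuming the photoreceptor acts like a first-order low-pass filter with a time constant equal to the measured integration time, and that flicker fuses once its modulation is attenuated below some fixed threshold. The filter model, the threshold value, and the example integration times are all illustrative assumptions, not measured figures:

```python
import math

def cff_from_integration_time(tau_s, threshold=0.2):
    """Approximate critical flicker fusion frequency (Hz) from an integration
    time constant tau_s (seconds). Models the photoreceptor as a first-order
    low-pass filter, |H(f)| = 1 / sqrt(1 + (2*pi*f*tau)^2), and assumes flicker
    fuses once its modulation falls below `threshold` of the original contrast.
    Both the model and the threshold value are illustrative assumptions."""
    return math.sqrt(1 / threshold**2 - 1) / (2 * math.pi * tau_s)

# Hypothetical example: a fast (10 ms) versus a slow (25 ms) integration time
for tau_ms in (10, 25):
    cff = cff_from_integration_time(tau_ms / 1000)
    print(f"tau = {tau_ms} ms  ->  approximate CFF ~ {cff:.0f} Hz")
```

A real conversion would of course need the measured temporal response and contrast sensitivity of the species in question; the point is just that, once a filter model is fixed, going from integration time to an approximate CFF (or back) is mechanical.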
Finally, I think some simple behavioural experiments could shed a lot of light on how we expect metrics around sensory (in this case visual) processing speeds to relate to the subjective experience of time. For instance, the time taken to make a choice between options is often much longer than the sensory processing time (e.g. 10+ seconds for bumblebees, which I expect have a CFF above 100 Hz), and probably reflects something more like the speed of a conscious process than the sensory processing speed alone does. A rough idea for an experiment is to take two closely related and putatively similar species, where one has double the CFF of the other, and measure the decision time of each on a choice-task to select flicker or motion at 25%, 50% and 100% of their CFF (the frequency schedule is sketched below). So if species one has a CFF of 80 Hz, test it at 20, 40 and 80 Hz, and if species two has a CFF of 40 Hz, test it at 10, 20 and 40 Hz. A difference in the decision-speed curve across each animal’s frequency range would be quite suggestive of a difference in the speed of decision making that was independent of the speed of stimulus perception. The experiment could also be done on the same animal in two conditions where its CFF differed, such as in a light- or dark-adapted state. For completeness, the choice-task could be compared to response times in a classical conditioning assay, which seems more reflexive, and I’d expect differences in speeds there to correlate more tightly with differences in CFF. The results of such experiments seem like they could inform your credences on the possibility and magnitude of subjective time differences between species.
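For concreteness, the stimulus schedule I have in mind could be generated as below; the species, CFF values, and fractions are just the hypothetical examples from my comment:

```python
def flicker_test_frequencies(cff_hz, fractions=(0.25, 0.5, 1.0)):
    """Stimulus frequencies for the proposed choice-task, expressed as fixed
    fractions of a measured CFF (purely illustrative)."""
    return [round(cff_hz * f, 1) for f in fractions]

# Hypothetical example species (or adaptation states) from the comment
for label, cff in [("species one, CFF 80 Hz", 80), ("species two, CFF 40 Hz", 40)]:
    print(f"{label}: test at {flicker_test_frequencies(cff)} Hz")
```

Plotting decision time against the fraction of CFF (rather than against absolute frequency) is what would let the two species’ curves be compared directly.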