I am an experienced interdisciplinary researcher and have focused on using computational methods to derive insights into biological systems. My academic research took me from collecting insects in tropical rainforests to imaging them in synchrotrons. I have become progressively more involved with the Effective Altruism community over several years and I am now aiming to apply my expertise in areas that more directly benefit society. To that end, I have recently redirected my research towards exploring novel technical countermeasures against viral pandemics.
gavintaylor
Many areas of science currently appear to have reproducibility problems with published research (some call it a crisis). Do you think that poor reproducibility of recent (approx. the last 30 years) scientific work has been a significant contributor to the current stagnation?
On the margin, do you think that funding is better spent on improving reproducibility (or more generally, the areas covered by Metascience) or on pursuing promising scientific research directly?
I’m generally in favour of experimenting with different granting models and am glad to hear that funders are starting to experiment with random allocation. However, I’d be a little cautious about moving to a system based solely on random grant assignment. Depending on the actual grant success rate per round (currently often <20%), it seems likely that one would be awarded grants quite infrequently, which would interrupt the continuity of research. For instance, if somebody gets a random grant and makes an interesting discovery, it seems silly to expect them to then wait several years for another random grant assignment before following up on it. So I feel that random assignment is probably better used for assigning funding to early-career researchers or pilot projects.
With respect to quality control, the Nature news article linked above notes:
assessment panels spend most of their time sorting out the specific order in which to place mid-ranking ideas. Low- and high-quality applications are easy to rank, she says. “But most applications are in the midfield, which is very big.”
The current modified lottery systems just remove the low-ranking applications, but if it’s easy to pick high-ranking applications, surely they should be given funding priority?
This article on doing systematic reviews well might also be of interest if you want to refine your process to make a publishable review. It’s written by environmental researchers, but I think the ideas should be fairly general (i.e. they mention Cochrane for medical reviews).
I’d also recommend having a look at Iris.ai. It is a bit similar to ConnectedPapers but works off a concept map (I think) rather than a citation map, so it can discover semantic linkages between your paper of interest and others that aren’t directly connected through reference links. I’ve just started looking at it this week and have been quite impressed with the papers it suggested.
The idea of doing deliberate practice on research skills is great. I agree that learning to do good research is difficult and poor feedback mechanisms certainly don’t help. Which other skills are you aiming to practice?
Hey Fernando, with regard to your very final point:
Networking with Brazilian researchers conducting EA related research, especially x-risks and institutional decision-making improvement (we have already done some work on mapping them)
I recalled that Luis Mota and I briefly spoke about this at the EAGxV some months ago. We discussed a few points around avenues for academic EA work in Brazil and thought the following could be promising:
* Governance of AI and biotechnology. Brazil is doing a bit of research on both (more so on bio), and is likely to be a regional hub of applied work in these areas.
* Natural pandemics. Rainforest clearance could bring people into contact with all sorts of viruses.
* Conversely, rainforest preservation assists with climate change.
* Farmed animal welfare. Brazil farms a lot of animals and domestic consumption is quite high relative to population income. Several ACE recommended charities already work here.
For the young academic, Brazilian academia may also be quite attractive, as it’s possible to get a permanent/tenured position quite soon after your PhD via a concurso. This could allow researchers to focus on work they view as valuable rather than having to chase high-impact publications for a decade to get a position, as is common in the US/EU. For those mostly doing theoretical work, who don’t need grants to run experiments, this could be a good position from which to research the above areas or meta-topics (e.g. cause prioritisation).
There are practical limitations on the resolution that a single neuron can encode (noise would be the limiting factor, maybe other considerations too). A common ‘design scheme’ that gets around this is range fractionation: If the receptors are endowed with distinct transfer functions in such a way that the points of highest sensitivity are scattered along the axis of the quality being measured, the precision of the sense organ as a whole can be increased.
This example of mechanosensory neural encoding in hawkmoths is a good example of range fractionation (and where I first heard about it).
Range fractionation is one common example where extra neurons increase resolution. There may be other ways that neural resolution can be increased without extra neurons. Also note that this has mostly been studied in peripheral sensory systems—I’m not sure if similar encoding schemes have been considered to represent the resolution of subjective experiences that are solely represented in the CNS.
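The intuition behind range fractionation can be illustrated with a minimal sketch (all names and parameter values here are my own illustrative assumptions, not from any of the papers above): a single saturating receptor can barely distinguish two strong stimuli, while a population of receptors with staggered sensitivity midpoints still responds differentially across the whole stimulus axis.

```python
import math

def receptor(stimulus, midpoint, slope=10.0):
    """Sigmoidal transfer function of a single receptor (response in [0, 1])."""
    return 1.0 / (1.0 + math.exp(-slope * (stimulus - midpoint)))

def population_response(stimulus, midpoints):
    """Summed response of a receptor population with staggered midpoints."""
    return sum(receptor(stimulus, m) for m in midpoints)

def discriminability(stimulus_a, stimulus_b, midpoints):
    """Difference in population response between two stimuli."""
    return abs(population_response(stimulus_a, midpoints)
               - population_response(stimulus_b, midpoints))

# A single receptor centred at 0.5 is nearly saturated for stimuli of 0.8 and 0.9,
# so its responses to them are almost identical.
single = [0.5]
# A fractionated population tiles the stimulus axis, so some receptors are still
# in their steep operating range for strong stimuli.
fractionated = [0.1, 0.3, 0.5, 0.7, 0.9]

print(discriminability(0.8, 0.9, single))        # small
print(discriminability(0.8, 0.9, fractionated))  # roughly an order of magnitude larger
```

The staggered midpoints play the role of the “points of highest sensitivity scattered along the axis” in the quote above.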
A new update on this project—it has now grown into the Ethnicity and COVID-19 Research Consortium (ECRC). They have started to publish some work, which is available here, and Michelle and her colleagues are still looking for BAME people who have been affected to participate in their study here.
The consortium will also be presenting some initial results of their work in an online mini-conference on November 27th (7PM GMT). Please register here to attend.
It seems like this issue is now receiving more attention as well, as the Biden-Harris COVID-19 response plan includes a ‘COVID-19 Racial and Ethnic Disparities Task Force’. I expect the ECRC’s work could be used to give that Task Force a head start, and if anybody knows somebody who will be on the Task Force, I would be happy to connect them to Michelle and the ECRC team.
many people assumed that this was the scientific consensus. Unfortunately, this misrepresented the scientific community’s state of uncertainty about the risks of nuclear war. There have only ever been a small number of papers published about this topic (<15 probably), mostly from one group of researchers, despite the topic being one of existential importance.
...
We’re finally beginning to see some healthy debate about some of these questions in the scientific literature. Alan Robock’s group published a paper in 2007 that found significant cooling effects even from a relatively limited regional war. A group from Los Alamos, Reisner et al, published a paper in 2018 that reexamined some of the assumptions that went into Robock et al’s model, and concluded that global cooling was unlikely in such a scenario. Robock et al. responded, and Reisner et al responded to the response. Both authors bring up good points, but I find Reisner’s position more compelling. This back and forth is worth reading for those who want to investigate deeper.

I’ve always found it a bit weird that so few researchers have worked on such an important question. It’s good to hear that more researchers are now engaging with the nuclear winter modeling. Besides genuine scientific disagreements about the modeling, I wasn’t surprised to find that Wikipedia also notes there are some doubts about the emotional and political bias of the researchers involved:
MIT meteorologist Kerry Emanuel similarly wrote a review in Nature that the winter concept is “notorious for its lack of scientific integrity” due to the unrealistic estimates selected for the quantity of fuel likely to burn, the imprecise global circulation models used, and ends by stating that the evidence of other models points to substantial scavenging of the smoke by rain.[179] Emanuel also made an “interesting point” about questioning proponents’ objectivity when it came to strong emotional or political issues that they hold.[11]
I think that funding another group of climate modellers to conduct nuclear winter simulations independently of the Robock group would provide a valuable second perspective on this. Alternatively, an adversarial collaboration between the Robock group and some nuclear winter opponents could also produce valuable results.
This might be the first example I’ve seen of an Open Inverse Grant Proposal. Good luck!
The last newsletter from Spencer Greenberg/Clearer Thinking might be helpful:
There is a collection of pages about the ‘Kickstarter for coordinated action’ idea on LessWrong.
A friend of mine started Free our knowledge, which is intended to encourage collective action from academics to support open science initiatives (open access publishing, pre-registrations, etc.). The only enforcement is deanonymizing the pledge signatories after the threshold is reached (which hasn’t happened yet).
I recently attended the UNESCO Open Talks Webinar “Open Science for Building Resilience in the Face of COVID-19”, which touched on many of the ideas from the pre-print above. The webinar recording is available on YouTube, and I’ve also written up a short summary which can be accessed here. The WHO representative made it clear that they were in favour of Open Science and that it has assisted them in their work.
More generally, I think that Open Science is relevant to EAs from two perspectives. Firstly, it has the potential to reduce problems with, and increase benefits from, scientific research, which could benefit society. More directly, EA research often summarizes academic research, and EAs should benefit if that is both (legally) freely accessible and done more transparently. Although a lot of EA research is effectively published open-access (e.g. forum/blog posts), it could also be interesting to consider what other open science ideas can be incorporated into EA research.
I regard Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) as having been quite successful. From Wikipedia:
Notable developments by CSIRO have included the invention of atomic absorption spectroscopy, essential components of Wi-Fi technology, development of the first commercially successful polymer banknote, the invention of the insect repellent in Aerogard and the introduction of a series of biological controls into Australia, such as the introduction of myxomatosis and rabbit calicivirus for the control of rabbit populations.
And the items listed in the Innovation section. Still, I’m sure they have had (at least) a few research projects that didn’t go anywhere.
It would be an interesting case study on organisational effectiveness to compare the Fraunhofer Society to the Max Planck Society. Although they focus on different stages of research (applied innovation vs. basic science), they are both German non-profit research organizations and relatively similar in size (a quick google on MPS gives around 24 thousand staff and a $2.1 billion budget for 2018). Yet MPS is a world-renowned research organization and its researchers have been awarded numerous Nobel prizes. I’m not sure if MPS has specific goals, but nonetheless, it seems to be achieving much more impact than Fraunhofer. Some of this difference is probably just in appearances, as basic research tends to get more recognition and publicity than applied work, but it still seems like MPS is systematically doing better. Why is that?
---
Of course, it is not that the employees at Fraunhofer want to do harmful things. Many are cognitively dissonant, actually thinking that they do tremendous good. But many are aware of the problematic situation they are in. The dilemma is: Not having any goal-oriented incentive system, the Fraunhofer Society is dominated by the personal incentive of its members: Job security.
This is the same general trend I observed amongst a lot of University researchers, but it sounds like it’s progressed much further where you work. Careerism seems to kill the integrity of researchers.
---
When I told a senior scientist about CoolEarth, she replied:
“When it comes to climate change, we have to stop thinking in numbers”
When I asked her why, she said: “Because you can’t just throw a couple of dollars at the ground and ask mother nature to do it one more year”
This reminded me of The value of a life from the Minding Our Way sequence.
Nice write up. I’ve referenced the Rejuvenation Road Map on LEAF’s site several times, but never really knew much about the organisation itself.
Two extra points that I think would be interesting to ask about in the general questions on the landscape section:
-LEAF seems like they have a very good overview of the organisations already in ageing research (i.e. they raise funds for 9 other orgs). Is there any open space in the landscape where they would be excited about a new organisation being started?
-Do they view ageing research as primarily being talent or funding constrained? This could be separated into University and non-profit (e.g. SENS RF) based research, as I think the funding options available to each are quite different.
Good question. I did a quick google and came across Lisa Bero who seems to have done a huge amount of work on research integrity. From this popular article, it sounds like corporate funding is often problematic for the research process.
The article links to several systematic reviews her group has done, and the article ‘Industry sponsorship and research outcome’ does conclude that corporate funding leads to a bias in the published results:
Authors’ conclusions: Sponsorship of drug and device studies by the manufacturing company leads to more favorable efficacy results and conclusions than sponsorship by other sources. Our analyses suggest the existence of an industry bias that cannot be explained by standard ‘Risk of bias’ assessments.
I just read the abstract, so I’m not sure if they tried to identify whether this was solely due to publication bias or if corporate-funded research also tended to have other issues (e.g. less rigorous experimental designs or other questionable research practices).
I was recently reading the book Subvert! by Daniel Cleather (a colleague) and thought that this quote from Karl Popper, and the author’s preceding description of Popper’s position, sounded very similar to EA’s method of cause prioritisation and theory of change in the world. (Although I believe Popper is writing in the context of fighting against threats to democracy rather than threats to well-being, humanity, etc.) I haven’t read The Open Society and Its Enemies (or any of Popper’s books for that matter), but I’m now quite interested to see if he draws any other parallels to EA.
For the philosophical point of view, I again lean heavily on Popper’s The Open Society and Its Enemies. Within the book, he is sceptical of projects that seek to reform society based upon some grand utopian vision. Firstly, he argues that such projects tend to require the exercise of strong authority to drive them. Secondly, he describes the difficulty in describing exactly what utopia is, and that as change occurs, the vision of utopia will shift. Instead he advocates for “piecemeal social engineering” as the optimal approach for reforming society which he describes as follows:
“The piecemeal engineer will, accordingly, adopt the method of searching for, and fighting against, the greatest and most urgent evils of society, rather than searching for, and fighting for, its greatest ultimate good.”
I also quite enjoyed Subvert! and would recommend it as a fresh perspective on the philosophy of science. A key point from the book is:
The problem is that in practice, scientists often adopt a sceptical, not a subversive, stance. They are happy to scrutinise their opponents’ results when they are presented at conferences and in papers. However, they are less likely to be actively subversive, and to perform their own studies to test their opponents’ theories. Instead, they prefer to direct their efforts towards finding evidence in support of their own ideas. The ideal mode would be that the proposers and testers of hypotheses would be different people. In practice they end up being the same person.
I think this post is a good counterpoint to common adages like ‘don’t sweat the small stuff’ or ‘direction over speed’ that often come up in relation to career and productivity advice.
At the risk of making a very tenuous connection, this reminded me of an animal navigation strategy for moving towards a goal when orientation is unstable (i.e. the animal is not able to reliably face towards the goal): progress can still be made if it moves faster when facing towards the goal than away from it. (I don’t think this is a very well known navigation strategy; at least it didn’t seem to be in 2014 when I wrote up an experiment on this in my PhD thesis [Chapter 5].) Work is obviously a lot more multi-faceted than spatial navigation, but maybe an analogy could be made to school students or junior employees who don’t get much choice about what they work on day to day: they could go all out on the important things and just scrape by on the rest.
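The navigation strategy can be demonstrated with a minimal simulation (the setup and all parameter values are my own illustrative assumptions, not taken from the thesis chapter): a walker whose heading is re-drawn at random every step, so it never controls its orientation, still drifts steadily towards the goal if it simply moves faster whenever it happens to face the right way.

```python
import math
import random

def simulate(steps=10000, fast=2.0, slow=1.0, seed=0):
    """Random-heading walker with the goal in the +x direction.

    The heading is re-drawn uniformly each step (unstable orientation); the
    walker moves at speed `fast` when facing the +x half-plane and `slow`
    otherwise. Returns the final x displacement.
    """
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        heading = rng.uniform(-math.pi, math.pi)
        speed = fast if abs(heading) < math.pi / 2 else slow
        x += speed * math.cos(heading)
    return x

# With equal speeds the drift averages out; with speed modulation the walker
# makes steady progress towards the goal despite never steering.
print(simulate(fast=1.0, slow=1.0))  # stays near the origin
print(simulate(fast=2.0, slow=1.0))  # large positive displacement
```

The expected drift per step works out to (fast − slow)/π, so even a modest speed difference accumulates into reliable progress over many steps.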
Preprint: Open Science Saves Lives: Lessons from the COVID-19 Pandemic
Michelle’s study is now searching for participants. If you are Black, Asian, from a minority ethnic group, or a person of colour, and interested in sharing your lived experience of COVID-19, contact her at: michelle.king-okoye@igdore.org
See more details here.
Are there any areas covered by the fund’s scope where you’d like to receive more applications?