EA, Psychology & AI Safety Research

Given the enormous role the field of Psychology has played in the invention and development of Artificial Intelligence, and its immeasurable impact on the field, I’m surprised at how little conversation there is about Psychology and its related sub-disciplines.

So, to rectify that, I thought it might be helpful to list some historic and current AI researchers who also hold degrees in Psychology or its related fields. There is a great deal of ongoing research in these fields, so this is not meant to be a comprehensive list. My apologies to anyone I might have missed. Also, this is not an endorsement of anyone’s particular work.

Hopefully, this list will give you some ideas (especially when attempting to solve the ELK problem) and help you along in your journey toward “Doing Good Better.”

(Compiled from various sources, including the APA and History of AI and Psychology.)

David Marr (deceased), BS Mathematics, PhD Theoretical Neuroscience, AI Lab/Psychology, MIT

David Marr (19 January 1945 – 17 November 1980) was a British neuroscientist and physiologist. Marr integrated results from psychology, artificial intelligence, and neurophysiology into new models of visual processing. His work was very influential in computational neuroscience and led to a resurgence of interest in the discipline.

In 1973, he joined the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology as a visiting scientist, taking on a faculty appointment in the Department of Psychology in 1977, where he was made a tenured full professor in 1980. Marr’s work provided solid proof that a good theory in the behavioral and brain sciences need not trade off mathematical rigor for faithfulness to specific findings. More importantly, it emphasized the role of explanation over and above mere curve fitting, making it legitimate to ask why a particular brain process is taking place, and not merely what differential equation can describe it. https://shimon-edelman.github.io/marr/marr.html

Geoffrey Hinton, BA Experimental Psychology, PhD Artificial Intelligence (1978)

Since 2013, he has divided his time between Google Brain and the University of Toronto. In 2017, he co-founded the Vector Institute in Toronto and became its Chief Scientific Advisor.

With David Rumelhart and Ronald J. Williams, Hinton was co-author of a highly cited paper published in 1986 that popularized the backpropagation algorithm for training multi-layer neural networks, although they were not the first to propose the approach. Hinton is viewed as a leading figure in the deep learning community.

Hinton received the 2018 Turing Award, together with Yoshua Bengio and Yann LeCun, for their work on deep learning. They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning,” and have continued to give public talks together.

Joshua B. Tenenbaum, PhD, Brain and Cognitive Sciences, MIT

Professor of computational cognitive science and principal investigator in the Computer Science and AI Lab; studies learning, reasoning, and perception in humans and machines. He has pioneered accounts of human cognition based on sophisticated probabilistic models and developed several novel machine learning algorithms inspired by human learning, most notably Isomap, an approach to unsupervised learning of nonlinear manifolds in high-dimensional data. His current work focuses on understanding how people come to be able to learn new concepts from very sparse data (how we “learn to learn”), and on characterizing the nature and origins of people’s intuitive theories about the physical and social worlds. Recipient of early career awards from the Society for Mathematical Psychology, the Society of Experimental Psychologists, and the American Psychological Association, along with the Troland Research Award from the National Academy of Sciences.
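For readers who want something concrete, here is a minimal sketch of Isomap in action, using scikit-learn’s implementation on a synthetic “swiss roll” dataset. The dataset and parameter choices are my own illustration, not taken from Tenenbaum’s original paper.

```python
# Minimal Isomap sketch (illustrative parameters, not from the original paper).
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3-D points lying on a curled-up 2-D surface (a nonlinear manifold).
X, color = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Isomap: build a k-nearest-neighbor graph, estimate geodesic distances
# along the manifold, then embed the points so those distances are preserved.
embedding = Isomap(n_neighbors=10, n_components=2)
X_2d = embedding.fit_transform(X)

print(X.shape, "->", X_2d.shape)  # (1000, 3) -> (1000, 2)
```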

Tomer D. Ullman, B.Sc. Cognitive Science/Physics, Hebrew University; Ph.D. Brain and Cognitive Sciences, MIT; postdoctoral associate, Center for Brains, Minds, and Machines.

Cognitive scientist interested in common-sense reasoning and in building computational models to explain high-level cognitive processes and the acquisition of new knowledge by children and adults. In particular, he is focused on how children and adults come to form intuitive theories of agents and objects, and on providing both a functional and an algorithmic account of how these theories are learned. Such an account would go a long way toward explaining the basic cogs and springs of human intelligence, and would support the building of more human-like artificial intelligence.

Brenden M. Lake, Ph.D. Cognitive Science, MIT; M.S., B.S. Symbolic Systems, Stanford

In 2017, Brenden Lake started the Human & Machine Learning Lab at NYU, based at the Center for Data Science and also part of the NYU Department of Psychology, the larger NYU AI group, the CILVR lab, and the Computational Cognitive Science community at NYU. The lab studies human cognitive abilities that elude the best AI systems. Its current focus includes concept learning, compositional generalization, question asking, goal generation, and abstract reasoning. Its technical focus includes neuro-symbolic modeling and learning “through the eyes of a child” on developmentally realistic datasets. By studying distinctively human endeavors, there is an opportunity to advance both cognitive science and AI: in cognitive science, if people have abilities beyond the reach of algorithms, then we do not fully understand how these abilities work; in AI, these abilities are important open problems with opportunities to reverse-engineer the human solutions.

Samuel Gershman, B.A., Neuroscience, Columbia, Ph.D. Psychology, Princeton

Founder of the Gershman Lab. The lab’s research addresses the mystery of how our brains acquire richly structured knowledge about our environments, and how this knowledge helps us learn to predict and control reward, using a combination of behavioral, neuroimaging, and computational techniques. One prong of the research focuses on how humans and animals discover the hidden states underlying their observations, and how they represent these states. In some cases, these states correspond to complex data structures, like graphs, grammars, or programs. These data structures strongly constrain how agents infer which actions will lead to reward.

A second prong of the research is teasing apart the interactions between different learning systems. Evidence suggests the existence of at least two systems: a “goal-directed” system that builds an explicit model of the environment, and a “habitual” system that learns state-action response rules. These two systems are subserved by separate neural pathways that compete for control of behavior, but they may also cooperate with one another.
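As a rough illustration of that two-system distinction (my own sketch, not the Gershman Lab’s code, and assuming the common computational reading of “habitual” as model-free and “goal-directed” as model-based): the habitual system caches state-action values directly from experience, while the goal-directed system learns an explicit model of the environment and plans over it.

```python
import numpy as np

# Toy setup: a small Markov decision process with discrete states and actions.
n_states, n_actions, gamma = 5, 2, 0.9

# --- "Habitual" / model-free: cache state-action values directly ---
Q = np.zeros((n_states, n_actions))

def q_learning_update(s, a, r, s_next, alpha=0.1):
    """One temporal-difference update: no explicit model of the environment."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# --- "Goal-directed" / model-based: learn a model, then plan with it ---
# T[s, a, s'] = estimated transition probabilities, R[s, a] = estimated rewards.
T = np.full((n_states, n_actions, n_states), 1.0 / n_states)
R = np.zeros((n_states, n_actions))

def plan(n_iters=50):
    """Value iteration over the learned model (T, R)."""
    V = np.zeros(n_states)
    for _ in range(n_iters):
        V = (R + gamma * (T @ V)).max(axis=1)
    return V
```

The payoff of the model-based system in this toy setup is flexibility: if the learned reward estimates R change, replanning changes behavior immediately, whereas the cached Q-values must be relearned through further experience.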

Linda Smith, PhD Developmental Psychologist, AI Researcher, Indiana University

There has always been a deep connection between psychology and AI, says Linda Smith, PhD, a developmental psychologist and AI researcher at Indiana University Bloomington.

Now, some in the machine learning field are looking to psychological research on human learning and cognition to help take AI to that next level. They posit that by understanding how humans learn and think—and expressing those insights mathematically—researchers can build machines that are able to think and learn more like people do. https://www.apa.org/monitor/2018/04/cover-thinking-machine

She has some excellent insights on cognitive development in babies, and how humans acquire implicit knowledge, or “latent knowledge.” https://cogdev.sitehost.iu.edu

Noah Goodman, PhD Professor of Psychology and Computer Science, Stanford

“Humans are the most intelligent system we know,” says Noah Goodman, PhD, a professor of psychology and computer science at Stanford University who studies human reasoning and language. “So I study human cognition, and then I put on an engineering hat and ask, ‘How can I build one of those?’”

Alison Gopnik, PhD Developmental Psychology, works with AI Researchers

Professor of psychology and affiliate professor of philosophy at the University of California at Berkeley, where she has taught since 1988. She received her BA from McGill University and her PhD from Oxford University. She is a world leader in cognitive science, particularly the study of children’s learning and development.

Gopnik works on DARPA’s Machine Common Sense program (2019), a major research project funding collaborations between child psychologists and computer scientists. Read more in the Wall Street Journal.

At UC Berkeley, they are building a system called “MESS,” short for Model-Building, Exploratory, Social Learning System. These elements are the secret of babies’ success, and they have largely been missing from current AIs.

Matthew Botvinick, PhD Cognitive Scientist, DeepMind

According to Matthew Botvinick, PhD, a cognitive scientist and the director of neuroscience research at DeepMind, AI systems are moving in the direction of deep neural networks that can build their own mental models of the sort that currently must be programmed in by humans.

Gary Marcus, PhD Cognitive Science, MIT, AI Researcher, Founder Robust.ai

Co-author: Rebooting AI: Building Artificial Intelligence We Can Trust

Professor of Psychology and Neural Science at NYU, Marcus approaches AI from a behavioral perspective. He says, for instance, that there’s a huge bias in machine learning that everything is learned and nothing is innate, which ignores human instincts and brain biology. In order for machines to form goals, determine outcomes, and problem-solve, algorithms need to emulate human learning processes much more accurately.

2019: Interview with Lex Fridman

https://youtu.be/vNOTDn3D_RI

2022: Noam Chomsky and GPT-3 - The Road to AI We Can Trust

https://garymarcus.substack.com/p/noam-chomsky-and-gpt-3?s=r

Akshay Jagadeesh, PhD Student, Stanford, BS CS/Cognitive Science, UC Berkeley

Research: analyzing artificial neural networks and understanding what computations the human brain performs to give rise to perception. Teaching: helped design and teach several courses at UC Berkeley and Stanford, ranging from computer vision to neurobiology to the science of meditation. Currently at Inspirit AI.

Noam Chomsky, Father of Linguistics, Cognitive and Computational Psychology

Noam Chomsky’s work has greatly influenced artificial intelligence. Chomsky helped overturn the long-standing paradigm of behaviorism: the idea that human behavior can be reduced to the links between actions and their subsequent rewards and punishments. The main proponent of this theory, B.F. Skinner, attempted to explain language through behaviorism.

Skinner’s approach stressed the historical associations between a stimulus and the animal’s response—an approach easily framed as a kind of empirical statistical analysis, predicting the future as a function of the past.

But Chomsky focused less on the ‘action-reward system’ and more on gene-based modules in the brain that, when evolved and put together, form an intricate computational system whose output is language. Behaviorism could not explain nearly as well as Chomsky’s theories the richness and variety of language, how creatively we use it, or the speed and skill with which children pick it up with next to no exposure in the environment.

Chomsky’s insistence that language, like the visual or auditory system, is an inherent biological mechanism and should be studied as such was an important breakthrough for developing talking, conversing machines. MIT neuroscientist David Marr (profiled above) developed a closely related framework for understanding and studying complex biological systems at three levels.

Computational level. The first level is the computational level, which is essentially a specification of the input and output of the system; it defines the specific task or function the system is performing. In the case of the visual system, the input would be the image projected onto the retina, and the output would be the brain’s coherent interpretation of the scene.

Algorithmic level. The algorithmic level describes the mechanisms that lead from input to output, and is arguably the most important level for understanding how a complex biological system works. For example, it might describe how the image projected onto the retina is processed by the brain to achieve the output defined at the computational level.

Implementation level. The implementation level is the last of the three, and it examines the way the biological hardware, the neurons and circuits of the brain, goes about implementing the mechanisms described at the algorithmic level.
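A loose programming analogy (my own illustration, not from Marr or Chomsky) may make the three levels more concrete. Take the toy task of totaling a list of prices: the computational level specifies what function is computed, the algorithmic level specifies how, and the implementation level concerns the physical substrate that runs it.

```python
# Toy analogy for Marr's three levels of analysis (illustrative only).

# Computational level: WHAT is computed and why.
# Specification: total(prices) = the sum of all prices (an input/output relation).

# Algorithmic level: HOW the input is transformed into the output.
def total_sequential(prices):
    """One algorithm: accumulate a running sum, left to right."""
    running = 0.0
    for p in prices:
        running += p
    return running

def total_pairwise(prices):
    """A different algorithm computing the same function: split the list in half,
    total each half, and add the results (better numerical behavior on long lists)."""
    if len(prices) <= 2:
        return sum(prices)
    mid = len(prices) // 2
    return total_pairwise(prices[:mid]) + total_pairwise(prices[mid:])

# Implementation level: the physical substrate running the algorithm;
# here, a CPU executing Python bytecode; in the brain, networks of neurons.
print(total_sequential([1.5, 2.25, 3.0]), total_pairwise([1.5, 2.25, 3.0]))
```

Both algorithms satisfy the same computational-level specification, which is exactly the point: the levels can be studied somewhat independently, and knowing the hardware alone does not tell you which algorithm, or which task, it is carrying out.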

But Chomsky’s contributions to AI are matched by his complaints about the field and the current direction in which it is headed. He feels that AI has adopted a more complicated version of behaviorism, placing a heavy emphasis on data mining and statistical techniques to pick apart masses of data. Admittedly, it has practical value, but he argues that it is unlikely to reveal insights about human nature or cognition, and is “inadequate and shallow.”

We wouldn’t have taught the computer much about what the phrase “physicist Sir Isaac Newton” really means, even if we can build a search engine that returns sensible hits to users who type the phrase in. ~ Summary by Arjun Mani, 2013