Definitions
Cocosci... wait, what did you just say? I was amazed to discover that the field of Computational Cognitive Science exists, and at such an exciting time in its development. Cocosci is a new, growing, and heavily interdisciplinary field, so I'll leave it to a sampling of experts to describe! (Gili Karni kindly updated the faculty list on this page in Oct 2020; Vael Gates wrote the original page in 2016.)
Josh Tenenbaum (Computational Cognitive Science Group at MIT)
"We study the computational basis of human learning and inference. Through a combination of mathematical modeling, computer simulation, and behavioral experiments, we try to uncover the logic behind our everyday inductive leaps: constructing perceptual representations, separating “style” and “content” in perception, learning concepts and words, judging similarity or representativeness, inferring causal connections, noticing coincidences, predicting the future.
We approach these topics with a range of empirical methods — primarily, behavioral testing of adults, children, and machines — and formal tools — drawn chiefly from Bayesian statistics and probability theory, but also from geometry, graph theory, and linear algebra. Our work is driven by the complementary goals of trying to achieve a better understanding of human learning in computational terms and trying to build computational systems that come closer to the capacities of human learners."
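To make the Bayesian flavor of this work concrete, here's a toy sketch of my own, loosely in the spirit of Tenenbaum's "number game" (the hypotheses, priors, and number range are invented for illustration, not taken from the lab's models). It shows the kind of inductive leap the quote describes: a few examples are enough to strongly favor one concept over another.

```python
# Toy Bayesian concept learning, number-game style.
# All hypotheses and the uniform prior below are illustrative assumptions.

hypotheses = {
    "even numbers":   {n for n in range(1, 101) if n % 2 == 0},
    "odd numbers":    {n for n in range(1, 101) if n % 2 == 1},
    "powers of two":  {2, 4, 8, 16, 32, 64},
    "multiples of 8": {n for n in range(1, 101) if n % 8 == 0},
}
prior = {h: 1 / len(hypotheses) for h in hypotheses}  # uniform prior

def posterior(data):
    """P(h | data) with the 'size principle': examples are assumed to be
    drawn uniformly from the concept's extension, so smaller consistent
    hypotheses earn a higher likelihood."""
    scores = {}
    for h, extension in hypotheses.items():
        if all(x in extension for x in data):
            scores[h] = prior[h] * (1 / len(extension)) ** len(data)
        else:
            scores[h] = 0.0  # hypothesis ruled out by the data
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

# Seeing 16, 8, 2 makes "powers of two" leap far ahead of "even numbers",
# even though both hypotheses are consistent with every example.
for h, p in posterior([16, 8, 2]).items():
    print(f"{h}: {p:.3f}")
```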
Tom Griffiths (Computational Cognitive Science Lab at Princeton)
"The basic goal of our research is understanding the computational and statistical foundations of human inductive inference, and using this understanding to develop both better accounts of human behavior and better automated systems for solving the challenging computational problems that people solve effortlessly in everyday life. We pursue this goal by analyzing human cognition in terms of optimal or "rational" solutions to computational problems. For inductive problems, this usually means developing models based on the principles of probability theory, and exploring how ideas from artificial intelligence, machine learning, and statistics (particularly Bayesian statistics) connect to human cognition. We test these models through experiments with human subjects, looking at how people solve a wide range of inductive problems, including learning causal relationships, acquiring aspects of linguistic structure, and forming categories of objects.
Probabilistic models provide a way to explore many of the questions that are at the heart of cognitive science. As rational solutions to a problem, they can indicate how much information an "ideal observer" might extract from the available data, and provide information about the nature of the constraints that are needed in order to guarantee good inductive inferences. By making it possible to associate discrete hypotheses with probabilistic predictions, they allow us to explore how statistical learning can be combined with structured representations. By enabling us to define models of potentially unbounded complexity, they can also be used to answer questions about how well the complexity of these structured representations is warranted by the data. Finally, the extensive literature on schemes for constructing computationally efficient approximations to probabilistic inference provides a source of clues as to psychological and neural mechanisms that could support inductive inference, and new experimental methods for collecting information about people's beliefs and inductive biases.
The working hypothesis that probability theory gives a formal account of human inductive inference establishes connections between cognitive science and current research in machine learning, artificial intelligence, and statistics. This means that probabilistic models of cognition can establish a route for ideas in these disciplines to be explored as explanations for how people learn, and for our investigation of human cognition to inform the development of new methods for making automated systems that learn."
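Here's a small sketch of the rational-analysis idea in the quote above: an "ideal observer" computing an exact posterior, next to a cheap sample-based approximation of the sort the quote mentions as a clue to psychological mechanisms. The specific task (inferring a coin's bias from flips) and all the numbers are my own choices for illustration.

```python
# Ideal observer vs. a bounded, sample-based approximation.
# The task and parameters are illustrative assumptions.

import random

def likelihood(theta, heads, flips):
    """Probability of the observed flips given coin bias theta (Bernoulli);
    the binomial coefficient cancels in normalization, so it's omitted."""
    return theta ** heads * (1 - theta) ** (flips - heads)

# Discretize the bias so the ideal observer's posterior is exact.
grid = [i / 100 for i in range(1, 100)]
heads, flips = 8, 10  # observed: 8 heads in 10 flips

# Ideal observer: full Bayes over the grid (uniform prior).
weights = [likelihood(t, heads, flips) for t in grid]
z = sum(weights)
exact_mean = sum(t * w / z for t, w in zip(grid, weights))

# Bounded approximation: self-normalized importance weighting of a
# handful of random guesses drawn from the uniform prior.
random.seed(0)
samples = [random.random() for _ in range(20)]
ws = [likelihood(t, heads, flips) for t in samples]
approx_mean = sum(t * w for t, w in zip(samples, ws)) / sum(ws)

print(f"exact posterior mean: {exact_mean:.3f}")
print(f"20-sample estimate:   {approx_mean:.3f}")
```

With only 20 samples the estimate is noisy but lands in the right neighborhood, which is the point: approximate inference can look a lot like bounded, human-scale computation.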
Sam Gershman (Computational Cognitive Neuroscience Lab at Harvard)
"Our research aims to understand how richly structured knowledge about the environment is acquired, and how this knowledge aids adaptive behavior. We use a combination of behavioral, neuroimaging and computational techniques to pursue these questions.
One prong of this research focuses on how humans and animals discover the hidden states underlying their observations, and how they represent these states. In some cases, these states correspond to complex data structures, like graphs, grammars or programs. These data structures strongly constrain how agents infer which actions will lead to reward. A second prong of our research is teasing apart the interactions between different learning systems. Evidence suggests the existence of at least two systems: a 'goal-directed' system that builds an explicit model of the environment, and a 'habitual' system that learns state-action response rules. These two systems are subserved by separate neural pathways that compete for control of behavior, but the systems may also cooperate with one another."
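To illustrate the two-systems distinction in the quote, here's a toy sketch of my own (the task, outcomes, and numbers are all invented): a 'habitual' agent that caches action values versus a 'goal-directed' agent that plans from an explicit model, probed with a classic outcome-devaluation test.

```python
# Habitual (cached values) vs. goal-directed (model-based) control.
# Everything here is an illustrative assumption, not the lab's code.

ALPHA = 0.1                              # habitual-system learning rate
outcome_of = {0: "food", 1: "water"}     # the world's (hidden) structure
value_of = {"food": 1.0, "water": 0.5}   # current outcome values

q = {0: 0.0, 1: 0.0}   # habitual system: cached action values
model = {}             # goal-directed system: learned action -> outcome map

for _ in range(100):   # training: try both actions repeatedly
    for action in (0, 1):
        outcome = outcome_of[action]
        model[action] = outcome                    # learn the model
        reward = value_of[outcome]
        q[action] += ALPHA * (reward - q[action])  # cache the value

value_of["food"] = 0.0  # devaluation: food is suddenly worthless

# The cached habit lags behind; the model-based planner adapts at once.
habitual_choice = max(q, key=q.get)
goal_directed_choice = max(model, key=lambda a: value_of[model[a]])
print(f"habitual system still prefers action {habitual_choice}")          # 0
print(f"goal-directed system switches to action {goal_directed_choice}")  # 1
```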
Note: These professors do cocosci in the "inference" vein, which is the framework through which I first encountered cocosci and the one I'm most familiar with. However, many other topics can be considered computational cognitive science, so I urge you to check out the lab list below to see the breadth of the field!