Bridging AI and Cognitive Science (BAICS)

International Conference on Learning Representations (ICLR)

April 26, 2020

Addis Ababa, Ethiopia

Cognitive science and artificial intelligence (AI) have a long-standing shared history. Early research in AI was inspired by human intelligence and shaped by cognitive scientists (e.g., Elman, 1990; Rumelhart & McClelland, 1986). At the same time, efforts to understand human learning and processing have used methods and data borrowed from AI to build cognitive models that mimic human cognition (e.g., Anderson, 1975; Tenenbaum et al., 2006; Lieder & Griffiths, 2017; Dupoux, 2018). In the last five years, the field of AI has grown rapidly due to the success of large-scale deep learning models in a variety of applications (such as speech recognition and image classification). Interestingly, the algorithms and architectures in these models are often loosely inspired by natural forms of cognition (such as convolutional architectures and experience replay; e.g., Hassabis et al., 2017). In turn, improvements in these algorithms and architectures have enabled more advanced models of human cognition that can replicate, and thereby illuminate our understanding of, human behavior (Yamins & DiCarlo, 2016; Fan et al., 2018; Banino et al., 2018; Bourgin et al., 2019). Empirical data from cognitive psychology have also recently played an important role in measuring how current AI systems differ from humans and in identifying their failure modes (e.g., Linzen et al., 2016; Lake et al., 2017; Gopnik, 2017; Nematzadeh et al., 2018; Peterson et al., 2019; Hamrick, 2019).

The recent advances in AI confirm the value of a multidisciplinary approach inspired by human cognition. However, the large body of literature in each field makes it difficult for researchers to engage in multidisciplinary work without collaborators. Yet, outside of domain-specific subfields, there are few forums that enable AI researchers to connect with cognitive scientists and form such collaborations. Our workshop aims to inspire connections between AI and cognitive science across a broad set of topics, including perception, language, reinforcement learning, planning, human-robot interaction, animal cognition, child development, and reasoning.

Important Dates and Links

Submission site opens: January 3, 2020
Submission deadline: February 5, 2020, 6pm Pacific
Reviewing starts: February 14, 2020, 6pm Pacific
Reviews due: February 21, 2020, 6pm Pacific
Decisions announced: February 25, 2020, 6pm Pacific
Day of workshop: April 26, 2020

Call for Papers

We are soliciting short, four-page papers from cognitive science, neuroscience, and AI across two tracks.

The Research track will highlight advances at the intersection of AI, cognitive science, and neuroscience. We welcome submissions on:

  • AI systems which draw inspiration from natural intelligence,
  • Computational models of natural intelligence that leverage methods from AI,
  • Demonstrations of failure cases of AI systems where humans or animals succeed,
  • Datasets or environments for AI systems which target a particular aspect of natural intelligence,
  • Papers which address privacy and fairness in machine learning, using insights from the behavioral sciences, or
  • Works which otherwise combine AI and cognitive science.

The Blue Sky Ideas track will showcase longer-term ideas or positions on ways to connect AI with cognitive science. While we will not require such ideas to be fully instantiated in a working system, we do expect them to be grounded in empirical or computational evidence. We welcome submissions on:

  • Aspects of cognition which are under-explored in AI,
  • The order in which the research community should solve cognitive tasks in AI,
  • How research in AI could inform research in cognitive science,
  • Survey papers which summarize one area of research in AI or cognitive science for researchers in the other field, or
  • Works which otherwise propose a longer-term idea or position about the intersection of cognitive science and AI.

We particularly encourage submissions from students from groups that are underrepresented at machine learning conferences with respect to gender, gender identity, sexual orientation, race, ethnicity, nationality, disability, and institution. We are pleased to be able to offer a limited number of travel grants to student presenters from these groups; if you would like to apply for financial assistance, please indicate this when submitting your paper.

Submission Policy:

  • Submissions should be a maximum of four pages, plus any number of pages for references and supplementary material. We ask authors to use the supplementary material only for minor details that do not fit in the main paper.
  • Papers should be fully anonymized for double-blind review.
  • Papers should use the ICLR style file.
  • Dual submission policy: we welcome submissions that have been published or are currently under review at other venues, including both full conference papers and workshops. However, all submissions should be shortened to four pages.
  • Evaluation criteria: Papers will be reviewed for topicality, clarity, correctness, and novelty; however, we welcome submissions which are still work-in-progress. Papers which are longer than four pages, clearly off-topic, or not anonymized will be rejected without review.
  • Submission site: to be announced!

Invited Speakers

Yejin Choi is an Associate Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research manager at AI2, overseeing the project Mosaic. Her recent research focuses on commonsense knowledge and reasoning, neural language generation, and language grounding with vision. She was a recipient of the Borg Early Career Award (BECA) in 2018, was named among the IEEE's AI Top 10 to Watch in 2015, was a co-recipient of the Marr Prize at ICCV 2013, and was a faculty advisor for the Sounding Board team that won the inaugural Alexa Prize Challenge in 2017.
Judy Fan is an Assistant Professor of Psychology at the University of California, San Diego. The goal of her research is to reverse engineer the human cognitive toolkit, especially how people use physical representations of thought to learn, communicate, and create. Towards this end, her lab employs converging approaches from cognitive science, computational neuroscience, and artificial intelligence.
Leslie Kaelbling is a Professor at MIT. She has an undergraduate degree in Philosophy and a PhD in Computer Science from Stanford, and was previously on the faculty at Brown University. She was the founding editor-in-chief of the Journal of Machine Learning Research. Her research agenda is to make intelligent robots using methods including estimation, learning, planning, and reasoning. She is not a robot.
Maithilee Kunda is an Assistant Professor of Computer Science and Computer Engineering at Vanderbilt University. Her work in artificial intelligence, in the area of cognitive systems, looks at how visual thinking contributes to learning and intelligent behavior, with a focus on applications for individuals on the autism spectrum.
Laura Schulz is a Professor of Cognitive Science at MIT, where she is the Principal Investigator of the Early Childhood Cognition Lab. She studies how our commonsense understanding of the physical and social world is constructed during early childhood by investigating 1) how children infer the concepts and causal relations that enable them to engage in accurate prediction, explanation, and intervention; 2) the factors that support curiosity and exploration, allowing children to engage in effective discovery; and 3) how these abilities inform and interact with social cognition to support intuitive theories of the self and others.
Kimberly Stachenfeld is a neuroscientist at DeepMind studying computational neuroscience and machine learning. Her research focuses on (1) the neural mechanisms for learning relational structure in service of efficient reinforcement learning and (2) how to get machines to do something similar.

References

  • Anderson, J. R. (1975). Computer simulation of a language acquisition system: A first report. In R. Solso (Ed.). Information processing and cognition. Hillsdale, N.J.: Lawrence Erlbaum.
  • Banino, A., Barry, C., Uria, B., Blundell, C., Lillicrap, T., Mirowski, P., ... & Wayne, G. (2018). Vector-based navigation using grid-like representations in artificial agents. Nature, 557(7705), 429.
  • Bourgin, D., Peterson, J., Reichman, D., Griffiths, T., & Russell, S. (2019). Cognitive model priors for predicting human decisions. In International Conference on Machine Learning (pp. 5133-5141).
  • Dupoux, E. (2018). Cognitive science in the era of artificial intelligence: A roadmap for reverse-engineering the infant language-learner. Cognition, 173, 43-59.
  • Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179-211.
  • Fan, J. E., Yamins, D. L., & Turk-Browne, N. B. (2018). Common object representations for visual production and recognition. Cognitive Science, 42(8), 2670-2698.
  • Gopnik, A. (2017). Making AI more human. Scientific American, 316(6), 60-65.
  • Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95(2), 245-258.
  • Hamrick, J. (2019). Analogues of mental simulation and imagination in deep learning. Current Opinion in Behavioral Sciences, 29, 8-16.
  • Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40.
  • Lieder, F., & Griffiths, T. L. (2017). Strategy selection as rational metareasoning. Psychological Review, 124(6), 762-794.
  • Linzen, T., Dupoux, E., & Goldberg, Y. (2016). Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4, 521-535.
  • Nematzadeh, A., Burns, K., Grant, E., Gopnik, A., & Griffiths, T. (2018). Evaluating theory of mind in question answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
  • Peterson, J., Battleday, R., Griffiths, T. L., & Russakovsky, O. (2019). Human uncertainty makes classification more robust. Proceedings of the IEEE International Conference on Computer Vision.
  • Rumelhart, D. E., McClelland, J. L., & PDP Research Group. (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Cambridge, MA: MIT Press.
  • Tenenbaum, J. B., Griffiths, T. L., & Kemp, C. (2006). Theory-based Bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences, 10(7), 309-318.
  • Yamins, D. L., & DiCarlo, J. J. (2016). Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience, 19(3), 356.