Bridging AI and Cognitive Science (BAICS)

International Conference on Learning Representations (ICLR)

April 26, 2020

@baicsworkshop · #BAICS2020


Cognitive science and artificial intelligence (AI) have a long-standing shared history. Early research in AI was inspired by human intelligence and shaped by cognitive scientists (e.g., Elman, 1990; Rumelhart and McClelland, 1986). At the same time, efforts to understand human learning and processing borrowed methods and data from AI to build cognitive models that mimicked human cognition (e.g., Anderson, 1975; Tenenbaum et al., 2006; Lieder & Griffiths, 2017; Dupoux, 2018). In the last five years, the field of AI has grown rapidly due to the success of large-scale deep learning models in a variety of applications (such as speech recognition and image classification). Interestingly, the algorithms and architectures in these models are often loosely inspired by natural forms of cognition (such as convolutional architectures and experience replay; e.g., Hassabis et al., 2017). In turn, improvements in these algorithms and architectures have enabled more advanced models of human cognition that can replicate, and thereby deepen our understanding of, human behavior (Yamins & DiCarlo, 2016; Fan et al., 2018; Banino et al., 2018; Bourgin et al., 2019). Empirical data from cognitive psychology have also recently played an important role in measuring how current AI systems differ from humans and in identifying their failure modes (e.g., Linzen et al., 2016; Lake et al., 2017; Gopnik, 2017; Nematzadeh et al., 2018; Peterson et al., 2019; Hamrick, 2019).

Recent advances in AI confirm the success of a multidisciplinary approach inspired by human cognition. However, the large body of literature in each field makes it difficult for researchers to engage in multidisciplinary work without collaborators from the other field. Yet, outside of domain-specific subfields, there are few forums that enable AI researchers to actually connect with cognitive scientists and form such collaborations. Our workshop aims to inspire connections between AI and cognitive science across a broad set of topics, including perception, language, reinforcement learning, planning, human-robot interaction, animal cognition, child development, and reasoning.

Sponsors

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Organizers

Program Committee

Adam Marblestone, Aishwarya Agrawal, Andrea Banino, Andrew Jaegle, Anselm Rothe, Ari Holtzman, Bas van Opheusden, Ben Peloquin, Bill Thompson, Charlie Nash, Danfei Xu, Emin Orhan, Erdem Biyik, Erin Grant, Jon Gauthier, Josh Merel, Joshua Peterson, Kelsey Allen, Kevin Ellis, Kevin McKee, Kevin Smith, Leila Wehbe, Lisa Anne Hendricks, Luis Piloto, Mark Ho, Marta Halina, Marta Kryven, Matthew Overlan, Max Kleiman-Weiner, Maxwell Forbes, Maxwell Nye, Michael Chang, Minae Kwon, Pedro Tsividis, Peter Battaglia, Qiong Zhang, Raphael Koster, Richard Futrell, Robert Hawkins, Sandy Huang, Stephan Meylan, Suraj Nair, Tal Linzen, Tina Zhu, Wai Keen Vong

References

  • Anderson, J. R. (1975). Computer simulation of a language acquisition system: A first report. In R. Solso (Ed.), Information processing and cognition. Hillsdale, NJ: Lawrence Erlbaum.
  • Banino, A., Barry, C., Uria, B., Blundell, C., Lillicrap, T., Mirowski, P., ... & Wayne, G. (2018). Vector-based navigation using grid-like representations in artificial agents. Nature, 557(7705), 429.
  • Bourgin, D., Peterson, J., Reichman, D., Griffiths, T., & Russell, S. (2019). Cognitive model priors for predicting human decisions. In International Conference on Machine Learning (pp. 5133-5141).
  • Dupoux, E. (2018). Cognitive science in the era of artificial intelligence: A roadmap for reverse-engineering the infant language-learner. Cognition, 173, 43-59.
  • Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179-211.
  • Fan, J. E., Yamins, D. L., & Turk-Browne, N. B. (2018). Common object representations for visual production and recognition. Cognitive Science, 42(8), 2670-2698.
  • Gopnik, A. (2017). Making AI more human. Scientific American, 316(6), 60-65.
  • Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95(2), 245-258.
  • Hamrick, J. (2019). Analogues of mental simulation and imagination in deep learning. Current Opinion in Behavioral Sciences, 29, 8-16.
  • Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40.
  • Lieder, F., & Griffiths, T. L. (2017). Strategy selection as rational metareasoning. Psychological Review, 124(6), 762-794.
  • Linzen, T., Dupoux, E., & Goldberg, Y. (2016). Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4, 521-535.
  • Nematzadeh, A., Burns, K., Grant, E., Gopnik, A., & Griffiths, T. (2018). Evaluating theory of mind in question answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
  • Peterson, J., Battleday, R., Griffiths, T. L., & Russakovsky, O. (2019). Human uncertainty makes classification more robust. In Proceedings of the IEEE International Conference on Computer Vision.
  • Rumelhart, D. E., McClelland, J. L., & PDP Research Group. (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Cambridge, MA: MIT Press.
  • Tenenbaum, J. B., Griffiths, T. L., & Kemp, C. (2006). Theory-based Bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences, 10(7), 309-318.
  • Yamins, D. L., & DiCarlo, J. J. (2016). Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience, 19(3), 356.