CUSACK LAB
  • Home
  • Participate
  • Research
  • Media
  • Publications
  • SEMINARS
  • People
  • Vacancies
  • News
  • Contact

NEWS

Making Machines That Learn Like Humans

2/11/2020


 
A core goal of the lab is to make machines learn more like human infants. Here is a summary of some of our recent work in this area.
Self-supervised Babies
Developmental psychologists have shown that infants learn a great deal in their first year. Yet infants' understanding of language remains rudimentary until around the end of that year, so much of this early learning must be "self-supervised": infants learn without being explicitly taught. At present, machines are mostly trained on hand-curated datasets that have been painstakingly labelled by humans. Self-supervised learning algorithms can potentially reduce this dependence on labelled data, and so are of great interest to the machine learning community.
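The difference between the two regimes can be illustrated with a toy sketch (purely illustrative; the data and function names are invented and are not the lab's code). In supervised learning the targets are human-provided labels; in self-supervised learning the targets are manufactured from the raw data itself, for instance by predicting what comes next:

```python
# Toy illustration of supervised vs self-supervised targets.
# (Hypothetical data; not the lab's code.)

def supervised_pairs(data, labels):
    """Supervised learning: targets are human-provided labels."""
    return list(zip(data, labels))

def self_supervised_pairs(data):
    """Self-supervised learning: targets come from the data itself --
    here, each item's target is simply the item that follows it."""
    return [(data[i], data[i + 1]) for i in range(len(data) - 1)]

stream = ["crawl", "stand", "walk", "run"]  # unlabelled "experience"
print(self_supervised_pairs(stream))
# -> [('crawl', 'stand'), ('stand', 'walk'), ('walk', 'run')]
```

No human labelling is needed for the second function; the structure of the data itself provides the training signal.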
In an arXiv preprint, Lorijn Zaadnoordijk from the lab and our collaborator Tarek Besold have reviewed the developmental psychology literature to identify potential "next big thing(s)" for this area of machine learning.
Learning Semantics
Humans have a deep understanding of the world. When we recognise an object, we know what other things it is similar to, and we can classify it as part of a superordinate category. This type of knowledge is called semantic knowledge. Cliona O'Doherty has been testing the idea that, by observing which objects co-occur in the world, infants could learn not only to recognise things but also something of their semantics. She has done this by setting up a computational model using a deep neural network.
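The intuition can be sketched in a few lines (a hypothetical toy, not Cliona's model or data): objects that appear within a short temporal window are counted as co-occurring, and more frequent co-occurrence is taken as a cue that two objects are semantically related.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(frames, window=2):
    """Count how often pairs of objects appear within `window`
    consecutive frames of an unlabelled visual stream."""
    counts = Counter()
    for i in range(len(frames) - window + 1):
        seen = set().union(*frames[i:i + window])  # objects visible in this window
        for pair in combinations(sorted(seen), 2):
            counts[pair] += 1
    return counts

# Hypothetical stream: the sets of objects visible at successive moments.
stream = [{"cup", "spoon"}, {"cup", "table"}, {"ball", "dog"}, {"ball", "dog"}]
counts = cooccurrence_counts(stream)
print(counts[("ball", "dog")])  # -> 2: "ball" and "dog" keep appearing together
```

A deep network trained with a contrastive objective can exploit the same temporal structure, pulling together the representations of objects that co-occur in time.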

Cliona O'Doherty will present "SemanticCMC - improved semantic self-supervised learning with naturalistic temporal co-occurrences" at the workshop Self-supervised learning: theory and practice at Neural Information Processing Systems (NeurIPS) 2020.
How Can Random Networks Explain the Brain So Well?
A part of the brain called the inferotemporal (IT) cortex is critical for the ability of humans and other primates to visually recognise objects. Currently, deep neural networks are the best models of brain responses in the IT cortex of adults. It has been argued that this is because the visual features that deep neural networks learn for object recognition are the same as those IT uses. But Anna Truzzi has been investigating a conundrum: untrained (randomly weighted) deep neural networks also do a surprisingly good job of modelling IT activity.
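Why might an untrained network model brain activity at all? A simplified encoding-model sketch (with simulated data; this is not Anna's analysis) shows the idea: stimuli are passed through a fixed random nonlinear layer, and only a linear readout is fitted to the "brain" responses, as is standard when comparing network features to neural data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: "images" are random vectors, and simulated
# "IT responses" are a nonlinear function of them.
X = rng.standard_normal((200, 50))                 # 200 stimuli, 50 pixels
responses = np.tanh(X @ rng.standard_normal(50))   # simulated IT activity

# An UNTRAINED network layer: a fixed random projection + ReLU.
W_random = rng.standard_normal((50, 300))
features = np.maximum(X @ W_random, 0.0)

# Fit only a linear readout from the random features to the responses
# (ridge regression), as in typical encoding-model analyses.
lam = 1.0
A = features.T @ features + lam * np.eye(300)
w = np.linalg.solve(A, features.T @ responses)
pred = features @ w

r = np.corrcoef(pred, responses)[0, 1]
print(f"correlation with simulated responses: {r:.2f}")
```

Even though the projection was never trained, the random nonlinear features span a rich enough space for the linear readout to fit the responses well, which is one reason architecture alone can carry much of the explanatory power.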
Anna presented the paper "Convolutional Neural Networks as a Model of Visual Activity in The Brain: Greater Contribution of Architecture Than Learned Weights" at the workshop Bridging AI and Cognitive Science at the International Conference on Learning Representations (ICLR) 2020. She will also present "Understanding CNNs as a model of the inferior temporal cortex: using mediation analysis to unpack the contribution of perceptual and semantic features in random and trained networks" at the NeurIPS 2020 workshop Shared Visual Representations in Humans and Machine Intelligence. This work is also directly relevant to neuroscientists, and was presented at the neuromatch 1.0 conference as "Are deep neural networks effective models of visual activity in the brain because of their architecture or training?".

References
  • O'Doherty, C. and Cusack, R. (2020) "SemanticCMC - improved semantic self-supervised learning with naturalistic temporal co-occurrences." Workshop: Self-supervised learning: theory and practice, NeurIPS 2020.
  • Truzzi, A. and Cusack, R. (2020) "Convolutional Neural Networks as a Model of Visual Activity in The Brain: Greater Contribution of Architecture Than Learned Weights." Workshop: Bridging AI and Cognitive Science, International Conference on Learning Representations 2020.
  • Truzzi, A. and Cusack, R. (2020) "Are deep neural networks effective models of visual activity in the brain because of their architecture or training?" Neuromatch conference 1.0.
  • Truzzi, A. and Cusack, R. (2020) "Understanding CNNs as a model of the inferior temporal cortex: using mediation analysis to unpack the contribution of perceptual and semantic features in random and trained networks." Workshop: Shared Visual Representations in Humans and Machine Intelligence, NeurIPS 2020.
  • Zaadnoordijk, L., Besold, T.R. and Cusack, R. (2020) "The Next Big Thing(s) in Unsupervised Machine Learning: Five Lessons from Infant Learning." arXiv:2009.08497.

    Author

    Rhodri Cusack
    Neuroscientist

