A core goal of the lab is to make machines learn more like human infants. Here is a summary of some of our recent work in this area.

Self-supervised Babies

Developmental psychologists have shown that infants learn many things in their first year. However, their linguistic understanding remains primitive until near the end of that year, so much of their learning must be "self-supervised", in that they can learn without being explicitly taught. At present, machines are mostly taught using hand-curated datasets, which are painstakingly labelled by humans. Self-supervised learning algorithms can potentially reduce the dependence on these datasets, and so are of great interest to the machine learning community. In an arXiv preprint, Lorijn Zaadnoordijk from the lab and our collaborator Tarek Besold have reviewed the developmental psychology literature to identify potential "next big thing(s)" for this area of machine learning.

Learning Semantics

Humans have a deep understanding of the world. When we recognise an object, we know what other things it is similar to, and we can classify it as part of some superordinate category. This type of knowledge is called semantic knowledge. Cliona O'Doherty has been testing the idea that, by observing the co-occurrences of objects in the world, infants could learn not just how to recognise things but also something of their semantics. She has done this with a computational model built on a deep neural network. Cliona will present "SemanticCMC - improved semantic self-supervised learning with naturalistic temporal co-occurrences" at the workshop Self-supervised learning: theory and practice at Neural Information Processing Systems (NeurIPS) 2020.

How Can Random Networks Explain the Brain So Well?
A part of the brain called the inferotemporal (IT) cortex is critical for humans and other primates to visually recognise objects. Currently, deep neural networks are the best models of brain responses in the IT cortex of adults. It has been argued that this is because the visual features that deep neural networks learn for object recognition are the same as those IT uses. However, Anna Truzzi has been investigating a conundrum: untrained (random) deep neural networks also do a surprisingly good job of modelling IT activity. Anna presented the paper "Convolutional Neural Networks as a Model of Visual Activity in The Brain: Greater Contribution of Architecture Than Learned Weights" at the workshop Bridging AI and Cognitive Science at the International Conference on Learning Representations (ICLR) 2020. She will also be presenting at the NeurIPS 2020 workshop Shared Visual Representations in Humans and Machine Intelligence, with the title "Understanding CNNs as a model of the inferior temporal cortex: using mediation analysis to unpack the contribution of perceptual and semantic features in random and trained networks". This work is also directly relevant to neuroscientists, and was presented at the neuromatch 1.0 conference with the title "Are deep neural networks effective models of visual activity in the brain because of their architecture or training?".
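To make the kind of model-to-brain comparison described above concrete, here is a minimal numpy sketch of a generic encoding analysis: fit a ridge regression from a network layer's features to voxel responses, then score it by held-out correlation. Everything here is an illustrative assumption, not the lab's actual pipeline or data — the matrices are random stand-ins for real CNN activations and fMRI responses, and the sizes and regularisation strength are arbitrary. Comparing this score for trained versus untrained networks is the essence of the question posed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins (NOT real data): activations of one network
# layer to 200 stimuli, and responses of 50 IT voxels to the same stimuli.
n_stim, n_feat, n_vox = 200, 512, 50
features = rng.standard_normal((n_stim, n_feat))
# Simulate the voxels as noisy linear readouts of the features, so the
# encoding model has some signal to find.
true_weights = rng.standard_normal((n_feat, n_vox))
voxels = features @ true_weights + 5.0 * rng.standard_normal((n_stim, n_vox))

def ridge_encoding_score(X, Y, alpha=100.0, n_train=150):
    """Fit ridge regression from features X to responses Y on a training
    split; return the mean held-out correlation across voxels."""
    Xtr, Xte = X[:n_train], X[n_train:]
    Ytr, Yte = Y[:n_train], Y[n_train:]
    # Closed-form ridge solution: W = (X'X + alpha*I)^-1 X'Y
    W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ Ytr)
    pred = Xte @ W
    # Pearson correlation between prediction and data for each voxel,
    # averaged over voxels.
    p = (pred - pred.mean(0)) / pred.std(0)
    y = (Yte - Yte.mean(0)) / Yte.std(0)
    return float((p * y).mean(0).mean())

# Features that carry signal about the voxels predict them well...
score = ridge_encoding_score(features, voxels)
# ...while unrelated random features serve as a near-zero baseline.
score_control = ridge_encoding_score(rng.standard_normal((n_stim, n_feat)), voxels)
print(f"informative features: {score:.2f}, control features: {score_control:.2f}")
```

In the real analyses, the feature matrix would come from a trained or untrained CNN and the response matrix from neuroimaging data; the surprising empirical finding is that features from random networks land much closer to the "informative" end of this comparison than one might expect.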
Author: Rhodri Cusack (November 2020)