The Cusack Lab leads an exciting series of talks right here at Trinity. This lecture series brings together experts in Neuroscience, Psychology, Computer Science, Machine Learning and Technology to contribute their knowledge towards an integrative discussion of Deep Learning and its applications to Neuroscience. Some of these talks will be more computer science focused, some more concerned with neuroscience, and some will be right at the intersection of the two. No matter what your specialisation or interest, you are encouraged to come along to each of the talks and learn about the rapid advances being made in these fields. See below for details of the schedule.
Upcoming Talks
We are looking forward to continuing this lecture series in 2021, after the successful invited talks from 2020! Watch this space for exciting announcements and the upcoming schedule, and as always feel free to get in touch with Cliona if you have any questions, suggestions for speakers or want to join our mailing list.
We can't wait to see you all for some more exciting interdisciplinary discussions in 2021!
Future talks are always being scheduled; keep an eye out for further details.
If you would like to suggest a speaker or topic, please get in touch at [email protected].
Past Talks
Deep Learning for Visual Computing in V-SENSE
Prof. Aljosa Smolic, Professor of Creative Technologies, School of Computer Science, Trinity College Dublin. 23rd January 2020.
Artificial Intelligence (AI) has made it from science fiction into everyday life. Machine Learning (ML) has enabled breakthroughs thanks to the availability of massive data and computational resources. Deep Learning (DL) in particular has disrupted all areas of visual computing (VC), including computer vision, computer graphics, and image/video processing. The V-SENSE team at Trinity College Dublin has taken up this challenge and opportunity and made a number of significant contributions to the field of DL for VC over the last two years, published at a range of venues. This talk will highlight some of these, including normal estimation, alpha matting, HDR tone mapping, 360° image saliency estimation, and others. A particular focus will be on applications of, and relations to, visual perception.
Learning Visual Representations Without Labels
Mathilde Caron, PhD Candidate, Facebook AI Research (FAIR) and Inria (French Institute for Research in Computer Science and Automation). 30th January 2020.
Mathilde Caron is currently a second-year PhD student, based jointly at Inria and at Facebook AI Research (FAIR), working on large-scale unsupervised representation learning for vision. Unsupervised learning is an important direction in deep learning, minimising the amount of labelled data needed to train state-of-the-art neural networks. This training regime is also much more biologically plausible, making such unsupervised networks worthwhile objects of study for neuroscientists interested in their potential as models of vision. Mathilde's 2018 paper "Deep Clustering for Unsupervised Learning of Visual Features" has been highly cited and influential for progress in unsupervised learning methods. Her supervisors are Julien Mairal, Piotr Bojanowski and Armand Joulin.
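For readers curious what "deep clustering" looks like in practice, the sketch below gives a minimal, hypothetical PyTorch-style rendering of the core idea in that paper: cluster the network's current features with k-means, then train the network to predict the cluster assignments as pseudo-labels. The model, data loader and hyperparameters here are illustrative placeholders, not the paper's actual configuration.

```python
# Minimal sketch of the "deep clustering" idea (Caron et al., 2018):
# alternate between (1) running k-means on the network's current features and
# (2) training the network to predict those cluster assignments as pseudo-labels.
# The backbone, classifier head, loader and hyperparameters are placeholders.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def deep_cluster_epoch(backbone, classifier, loader, k=100, device="cpu"):
    # Step 1: extract a feature vector for every image with the current backbone.
    backbone.eval()
    batches, feats = [], []
    with torch.no_grad():
        for x, _ in loader:                        # any labels are ignored (unsupervised)
            batches.append(x)
            feats.append(backbone(x.to(device)).cpu())
    feats = torch.cat(feats).numpy()

    # Step 2: k-means on the features provides pseudo-labels for this epoch.
    pseudo = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    pseudo = torch.as_tensor(pseudo, dtype=torch.long).split(loader.batch_size)

    # Step 3: train backbone + classifier head to predict the pseudo-labels.
    # (In the original method the head is re-initialised each epoch, since
    # cluster indices carry no meaning from one epoch to the next.)
    backbone.train()
    params = list(backbone.parameters()) + list(classifier.parameters())
    opt = torch.optim.SGD(params, lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for x, y in zip(batches, pseudo):
        opt.zero_grad()
        loss = loss_fn(classifier(backbone(x.to(device))), y.to(device))
        loss.backward()
        opt.step()
```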
Unsupervised Learning Predicts Human Perception and Misperception of Material Properties
Kate Storrs, PhD, Alexander von Humboldt Research Fellow, Justus Liebig University Giessen, Germany. 19th February 2020.
This talk discussed two projects in which unsupervised deep learning was used, in combination with large computer-rendered stimulus sets, as a framework for understanding how brains learn rich scene representations without ground-truth information about the world. By learning to generate novel images, or learning to predict the next frame in video sequences, these models spontaneously learn to cluster images according to underlying scene properties such as illumination, shape, and material. The models' representation of material gloss was probed in detail, and they were found to predict human-perceived glossiness extremely well, both for novel surfaces and for network-generated images. Strikingly, the networks also correctly predicted known failures of gloss perception on an image-by-image basis – for example, that bumpier surfaces tend to appear glossier than flatter ones, even when made of identical material. A supervised DNN and several other control models failed to do so. Perceptual dimensions like "glossiness", which appear to estimate properties of the physical world, can therefore emerge spontaneously from learning to efficiently encode sensory data – indeed, unsupervised learning principles may account for a large number of perceptual dimensions in vision and beyond!
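As a rough illustration of the "predict the next frame" flavour of unsupervised learning described above, here is a hypothetical PyTorch sketch of a small convolutional encoder-decoder trained to predict frame t+1 from frame t. The architecture, data and training loop are placeholders and are not the models used in this work.

```python
# Illustrative sketch: unsupervised learning by next-frame prediction.
# A small convolutional encoder-decoder predicts frame t+1 from frame t; the
# bottleneck must then capture scene properties (shape, lighting, material)
# useful for predicting how the image will change. Purely a placeholder model.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame):
        return self.decoder(self.encoder(frame))

model = NextFramePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# video: (batch, time, channels, height, width), values in [0, 1]
video = torch.rand(8, 10, 3, 64, 64)           # stand-in for rendered video sequences
for t in range(video.shape[1] - 1):
    pred = model(video[:, t])                  # predict the next frame from the current one
    loss = loss_fn(pred, video[:, t + 1])      # no labels: the video itself is the target
    opt.zero_grad(); loss.backward(); opt.step()
```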
A mathematical theory of semantic development in deep neural networks
Prof. Andrew Saxe, Sir Henry Dale Fellow, Department of Experimental Psychology, University of Oxford. 24th July 2020. Watch back at: https://www.crowdcast.io/e/dlnseries4saxe
An extensive body of empirical research has revealed remarkable regularities in the acquisition, organization, deployment, and neural representation of human semantic knowledge, thereby raising a fundamental conceptual question: what are the theoretical principles governing the ability of neural networks to acquire, organize, and deploy abstract knowledge by integrating across many individual experiences? I will describe work addressing this question by mathematically analyzing the nonlinear dynamics of learning in deep linear networks. We find exact solutions to this learning dynamics that yield a conceptual explanation for the prevalence of many disparate phenomena in semantic cognition, including the hierarchical differentiation of concepts through rapid developmental transitions, the ubiquity of semantic illusions between such transitions, the emergence of item typicality and category coherence as factors controlling the speed of semantic processing, changing patterns of inductive projection over development, and the conservation of semantic similarity in neural representations across species. Furthermore, I will show that many of these phenomena only arise in deep but not shallow networks. Thus, surprisingly, this simple neural model qualitatively recapitulates many diverse regularities underlying semantic development, while providing analytic insight into how the statistical structure of an environment can interact with nonlinear deep learning dynamics to give rise to these regularities.
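To give a flavour of the exact solutions mentioned in the abstract, consider a two-layer linear network trained by gradient descent on data whose input-output correlations decompose into modes with singular values s. Under the assumptions of Saxe and colleagues' analysis (whitened inputs and small, decoupled initial weights), the network's effective strength a(t) on each mode rises sigmoidally from its tiny initial value a(0) towards its target s; the expression below is a sketch of that result, not a full derivation.

```latex
% Sketch (under the assumptions above) of the learning trajectory of a single
% input-output mode in a two-layer linear network: s is the mode's singular value
% in the training data, a(0) its small initial strength, and \tau a time constant
% set by the learning rate.
a(t) \;=\; \frac{s\, e^{2 s t / \tau}}{\,e^{2 s t / \tau} - 1 + s / a(0)\,},
\qquad a(0) \approx 0, \qquad \lim_{t \to \infty} a(t) = s .
```

Because each mode sits near zero for longer the smaller its singular value, strongly represented structure is learned first, which is what produces the stage-like developmental transitions and plateaus described in the talk.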
Deep Learning for Encoding and Decoding Human Brain Activity during Natural Vision and Language Comprehension
Yizhen Zhang, PhD Candidate, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, United States. 10th November 2020. Watch back at: https://www.crowdcast.io/e/dlnseries5zhang
Initially inspired by biological neural networks, deep artificial neural networks have demonstrated near-human performance on some visual and language tasks. Comparing artificial neural networks to biological brains has increasingly been used to investigate neural information processing in the brain. In this talk, I will share our lab's recent studies at the intersection of neuroscience and artificial intelligence. I will present our progress in using deep learning models to encode and decode brain activity while human subjects watch movies or listen to stories. Building upon these findings, I will put forward new hypotheses and speculations about the mechanisms by which the brain represents visual and semantic information through distributed cortical networks. Lastly, I will highlight some provisional progress in using neuroscience principles (e.g. predictive coding) to build biologically plausible models for brain research.
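For a concrete sense of what an encoding model of this kind can look like, below is a hypothetical scikit-learn sketch that maps features from one layer of a deep network onto fMRI voxel responses with cross-validated ridge regression, scoring each voxel by the correlation between predicted and measured responses on held-out stimuli. The arrays, shapes and regularisation grid are placeholders rather than the lab's actual pipeline.

```python
# Illustrative encoding-model sketch: predict each fMRI voxel's response to natural
# stimuli from a deep network's features using cross-validated ridge regression.
# All data here are random placeholders standing in for real features and recordings.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

n_stimuli, n_features, n_voxels = 1000, 512, 200
dnn_features = np.random.randn(n_stimuli, n_features)   # e.g. one layer's activations per movie frame
voxel_responses = np.random.randn(n_stimuli, n_voxels)  # e.g. fMRI responses to the same frames

X_train, X_test, y_train, y_test = train_test_split(
    dnn_features, voxel_responses, test_size=0.2, random_state=0)

# One linear map per voxel, with the ridge penalty chosen by internal cross-validation.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)

# Encoding performance: correlation between predicted and measured response, per voxel.
pred = encoder.predict(X_test)
r = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print("median voxel prediction r =", round(float(np.median(r)), 3))
```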
Are you interested in presenting at our lecture series, or do you have any suggestions for us? Don't hesitate to get in touch with us on any of our social media, or directly at [email protected]
If you would like to be added to the deep learning for neuroscience mailing list, please contact [email protected]