Visit and Guest Lecture by Prof. Plamen Angelov: Bringing Deep Learning and Reasoning Closer

Guest Lecture “Bringing Deep Learning and Reasoning Closer” by Prof. Plamen Angelov: January 24, 2024, 14:30, Room J1630

Deep Learning continues to attract the attention and interest not only of the wider scientific community and industry, but also of society and policy makers. However, the mainstream approach of end-to-end iterative training of a hyper-parametric, cumbersome, and opaque model architecture has led some authors to brand it a “black box”. Cases have been reported in which such models give wrong predictions with high confidence, something that jeopardises safety and trust. Deep Learning focuses on accuracy and overlooks explainability, the semantic meaning of the internal model representations, reasoning, and their link with the problem domain. In fact, it takes a shortcut from large amounts of (labelled) data to predictions, bypassing causality and substituting it with correlation and error minimisation. It relies on assumptions about the data distributions that are often not satisfied, and it suffers from catastrophic forgetting when faced with continual and open-set learning. Once trained, such models are inflexible to new knowledge; they are good only for what they were originally trained for. Indeed, the ability to detect the unseen and unexpected and to start learning such new classes in real time, with little or no supervision (zero- or few-shot learning), is critically important but remains an open problem. The challenge is to close the gap between high levels of accuracy and semantically meaningful solutions.

This talk will focus on “getting the best from both worlds”: the powerful latent feature spaces formed by pre-trained deep architectures, such as transformers, combined with interpretable-by-design models (in linguistic, visual, semantic, and similarity-based form). One can see this as a fully interpretable frontend and a powerful backend working in harmony. Examples will be demonstrated from recent projects in autonomous driving, Earth Observation, and health, as well as on a set of well-known benchmarks.
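To make the frontend/backend pairing concrete, the minimal Python sketch below is an illustration only, not the speaker's own implementation: the random vectors stand in for embeddings produced by a frozen pre-trained transformer, and an interpretable-by-design front end classifies them by similarity to per-class prototypes, so the nearest prototype itself serves as the explanation.

```python
# Illustrative sketch of a prototype-based, similarity-interpretable classifier
# sitting on top of latent features from a pre-trained backbone.
# NOTE: the features here are random stand-ins; in practice they would come
# from a frozen transformer encoder.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are latent features (and labels) from a frozen encoder.
X_train = rng.normal(size=(100, 64))
y_train = rng.integers(0, 3, size=100)

# Interpretable-by-design front end: one prototype (class mean) per class.
prototypes = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}

def predict(x):
    """Classify by similarity to class prototypes; the nearest prototype is
    itself the explanation ("this input looks most like class c")."""
    sims = {c: -np.linalg.norm(x - p) for c, p in prototypes.items()}
    return max(sims, key=sims.get), sims

label, sims = predict(rng.normal(size=64))
print("predicted class:", label)
```

In prototype-based schemes of this kind, a prediction can be justified by pointing to the most similar prototype or training examples, which is one route to the similarity-based interpretability mentioned above.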

Biographical data of the speaker:

Prof. Angelov (PhD 1993, DSc 2015) holds a Personal Chair in Intelligent Systems at Lancaster University and is a Fellow of the IEEE, IET, AAIA, and ELLIS. He is a member-at-large of the Board of Governors (BoG) of the International Neural Networks Society (INNS) and of the Systems, Man and Cybernetics Society of the IEEE (SMC-S), as well as Program co-Director of Human-Centered Machine Learning for ELLIS. He has 400 publications in leading journals and peer-reviewed conference proceedings, 3 granted patents, and 3 research monographs (published by Springer, 2002 and 2018, and Wiley, 2012), cited over 15,000 times (h-index 63). Prof. Angelov has an active research portfolio in the area of explainable deep learning and its applications to autonomous driving and Earth Observation, with pioneering results in online learning from streaming data and evolving systems. His research has been recognised by multiple awards, including the 2020 Dennis Gabor Award “for outstanding contributions to engineering applications of neural networks”. He is the founding co-Editor-in-Chief of Springer’s journal Evolving Systems and Associate Editor of other leading scientific journals, including IEEE Transactions (IEEE-T) on Cybernetics, IEEE-T on Fuzzy Systems, and IEEE-T on AI. He has given over 30 keynote talks and has co-organised and co-chaired over 30 IEEE conferences (including several IJCNN) and workshops at CVPR, NeurIPS, ICCV, PerCom, and other leading conferences. Prof. Angelov chaired the Standards Committee of the Computational Intelligence Society of the IEEE, initiating the IEEE standard on explainable AI (XAI). More details can be found at www.lancs.ac.uk/staff/angelov
