Title: Interpretable Convolutional NNs and Graph CNNs: Role of Domain Knowledge
Fellow of the Institute of Electrical and Electronics Engineers (IEEE),
President of the International Neural Network Society (INNS)
Imperial College London, UK
Abstract: The success of deep learning (DL) and convolutional neural networks (CNNs) has also highlighted that NN-based analysis of large signals and images poses a considerable challenge, as the number of NN weights increases exponentially with data volume, the so-called Curse of Dimensionality. In addition, the largely ad-hoc fashion of their development, albeit one reason for their rapid success, has also brought to light the intrinsic limitations of CNNs, in particular those related to their black-box nature. To this end, we revisit the operation of CNNs from first principles and show that their key component, the convolutional layer, effectively performs matched filtering of its inputs with a set of templates (filters, kernels) of interest. This serves as a vehicle to establish a compact matched-filtering perspective of the whole convolution-activation-pooling chain, which allows for a theoretically well-founded and physically meaningful insight into the overall operation of CNNs. This is shown to help mitigate their interpretability and explainability issues, together with providing intuition for further developments and novel, physically meaningful ways of their initialisation. Such an approach is next extended to Graph CNNs (GCNNs), which benefit from the universal function approximation property of NNs, the pattern matching inherent to CNNs, and the ability of graphs to operate on nonlinear domains. GCNNs are revisited starting from the notion of a system on a graph, which serves to establish a matched-filtering interpretation of the whole convolution-activation-pooling chain within GCNNs, while inheriting the rigour and intuition from signal detection theory. This both sheds new light onto the otherwise black-box approach to GCNNs and provides a well-motivated and physically meaningful interpretation at every step of the operation and adaptation of GCNNs. It is our hope that the incorporation of domain knowledge, which is central to this approach, will help demystify CNNs and GCNNs, together with establishing a common language between the diverse communities working on Deep Learning and opening novel avenues for their further development.
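To illustrate the matched-filtering view described in the abstract, the short Python/NumPy sketch below (a minimal illustration, not code from the talk; the signal, template, and offset are assumed purely for the example) cross-correlates a noisy one-dimensional input with a known template, applies a ReLU activation, and max-pools the response; the response peaks where the input best matches the template, which is the behaviour attributed to the convolution-activation-pooling chain.

# A minimal sketch (assumed example, not from the talk) of the matched-filtering
# reading of a convolutional layer: correlating an input with a template (kernel)
# yields a response that peaks where the input best matches the template.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical template (kernel) and a noisy input containing it at offset 40.
template = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
x = 0.3 * rng.standard_normal(100)
x[40:45] += template

# "Convolution-activation-pooling" as matched filtering:
correlation = np.correlate(x, template, mode="valid")   # matched-filter response
activated = np.maximum(correlation, 0.0)                # ReLU activation
pooled = activated.max()                                # max-pooling: strongest match

print("best match at offset:", int(np.argmax(activated)))   # expected near 40
print("pooled matched-filter score:", round(float(pooled), 2))

The same reading extends to learned kernels: each feature map can be seen as recording how strongly, and where, its template is matched in the input.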
Short Bio: Danilo P. Mandic is a Professor of Machine Intelligence at Imperial College London, UK, and has been working in the areas of machine intelligence, statistical signal processing, big data, data analytics on graphs, bioengineering, and financial modelling. He is a Fellow of the IEEE and the current President of the International Neural Network Society (INNS). Dr Mandic is a Director of the Financial Machine Intelligence Lab at Imperial, and has more than 600 publications in international journals and conferences. He has published two research monographs on neural networks, entitled "Recurrent Neural Networks for Prediction", Wiley 2001, and "Complex Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear and Neural Models", Wiley 2009 (both first books in their respective areas), and has co-edited books on Data Fusion (Springer 2008) and Neuro- and Bio-Informatics (Springer 2012). He has also co-authored a two-volume research monograph on tensor networks for Big Data, entitled "Tensor Networks for Dimensionality Reduction and Large Scale Optimization" (Now Publishers, 2016 and 2017), and more recently a research monograph on Data Analytics on Graphs (Now Publishers, 2021). Dr Mandic is a 2019 recipient of the Dennis Gabor Award for "Outstanding Achievements in Neural Engineering", given by the International Neural Network Society. He is the 2023 winner of the Prize Paper Award, given by the IEEE Engineering in Medicine and Biology Society for his Smart Helmet article, the 2018 winner of the Best Paper Award in IEEE Signal Processing Magazine for his article on tensor decompositions for signal processing applications, and the 2021 winner of the Outstanding Paper Award at the International Conference on Acoustics, Speech and Signal Processing (ICASSP) series of conferences. Dr Mandic has served in various roles in the World Congress on Computational Intelligence (WCCI) and International Joint Conference on Neural Networks (IJCNN) series of conferences, and as an Associate Editor for IEEE Transactions on Neural Networks and Learning Systems, IEEE Signal Processing Magazine and IEEE Transactions on Signal Processing. He has given more than 80 Keynote and Tutorial lectures at international conferences and was appointed by the World University Service (WUS) as a Visiting Lecturer within the Brain Gain Program (BGP) in 2015. Danilo is currently serving as a Distinguished Lecturer for the IEEE Computational Intelligence Society and a Distinguished Lecturer for the IEEE Signal Processing Society. Dr Mandic is a 2014 recipient of the President's Award for Excellence in Postgraduate Supervision at Imperial College and holds six patents.
Title: Foundations of Transfer and Multitask Optimization and Advances with Generative AI and Large Language Models
Fellow of the Institute of Electrical and Electronics Engineers (IEEE)
Nanyang Technological University, Singapore
Abstract: Traditional optimization typically starts from scratch, assuming zero prior knowledge about the task at hand. Classical optimization solvers generally do not automatically improve with experience. In contrast, humans routinely draw on a pool of knowledge from past experiences when faced with new tasks. This approach is often effective, as real-world problems seldom exist in isolation. Similarly, artificial systems are expected to encounter numerous problems throughout their lifetime, many of which will be repetitive or share domain-specific similarities. This perspective naturally motivates the development of advanced optimizers that replicate human cognitive capabilities, leveraging past lessons to accelerate the search for optimal solutions to novel tasks. This talk will provide an overview of the origins and foundations of Transfer and Multitask Optimization and present some of the latest work on Generative AI and Large Language Model-based Multi-factorial Evolutionary Algorithms for conceptual design and machine learning model distillation.
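To make the knowledge-reuse idea concrete, the following minimal Python sketch (an illustrative assumption, not the speaker's multi-factorial evolutionary algorithm) warm-starts a simple evolutionary search on a new task with the solution previously found for a related task, and compares it with starting from scratch; the task functions, dimension, and step sizes are all hypothetical.

# A minimal sketch (illustrative assumption only) of the transfer-optimization idea:
# a simple (1+lambda) evolutionary search on a new task is seeded with a good
# solution found earlier on a related task, instead of starting from scratch.
import numpy as np

rng = np.random.default_rng(1)

def source_task(x):   # previously solved task: minimum at x = 1.0
    return np.sum((x - 1.0) ** 2)

def target_task(x):   # new, related task: minimum shifted slightly to x = 1.2
    return np.sum((x - 1.2) ** 2)

def evolve(f, x0, generations=200, step=0.1, offspring=10):
    best, best_f = x0, f(x0)
    for _ in range(generations):
        cand = best + step * rng.standard_normal((offspring, best.size))
        vals = np.apply_along_axis(f, 1, cand)
        i = int(np.argmin(vals))
        if vals[i] < best_f:
            best, best_f = cand[i], vals[i]
    return best, best_f

dim = 10
# Solve the source task first, then reuse its solution as prior knowledge.
x_src, _ = evolve(source_task, rng.uniform(-5, 5, dim))
_, f_cold = evolve(target_task, rng.uniform(-5, 5, dim), generations=20)  # from scratch
_, f_warm = evolve(target_task, x_src, generations=20)                    # transferred seed

print(f"target-task cost after 20 generations: cold start {f_cold:.3f}, warm start {f_warm:.3f}")

On this toy pair of shifted quadratics, the transferred seed typically reaches a much lower cost within the same small budget, which is the effect that transfer and multitask optimization aim to exploit in a principled, automated way.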
Short Bio: Professor Yew-Soon Ong (M'99-SM'12-F'18) received the Ph.D. degree in artificial intelligence in complex design from the University of Southampton, U.K., in 2003. He was Chair of the School of Computer Science and Engineering at Nanyang Technological University (NTU), Singapore. Currently he is a President's Chair Professor in Computer Science at NTU, and is the Chief Artificial Intelligence Scientist of the Agency for Science, Technology and Research, Singapore. At NTU, he also serves as co-Director of the Singtel-NTU Cognitive & Artificial Intelligence Joint Lab. His research interests are in evolutionary optimization and machine learning. He is general co-chair of the 2024 IEEE Conference on Artificial Intelligence and was conference chair of the 2016 IEEE Congress on Evolutionary Computation. He is also the founding Editor-in-Chief of the IEEE Transactions on Emerging Topics in Computational Intelligence, a senior associate editor of the IEEE Transactions on Neural Networks & Learning Systems, and an associate editor of the IEEE Transactions on Cybernetics and the IEEE Transactions on Artificial Intelligence. He has received several IEEE outstanding paper awards and has been listed as a Thomson Reuters Highly Cited Researcher and among the World's Most Influential Scientific Minds.
Title: Status of Human Neuroimaging Databases Worldwide Focusing on Psychiatric and Neurological Disorders
Nara Institute of Science and Technology, Japan
Abstract: In recent years, neuroimaging databases related to psychiatric and neurological disorders have provided researchers with the ability to identify both common and disorder-specific features, thereby contributing to the redefinition of disease spectra through data-driven approaches. Sharing large-scale, multi-center, multi-disorder databases has been increasingly recognized as essential for translating neuroimaging insights into real-world clinical applications. In Japan, the Brain/MINDS Beyond (2018–2023) neuroimaging database project successfully established a multi-site, multi-disorder MRI database. A distinctive feature of this database is the "traveling-subject" dataset, wherein each participant was scanned at every participating site. This facilitated the development of harmonization methods to mitigate site-specific variability, as well as the creation of a generalizable diagnostic marker based on brain networks associated with psychiatric disorders. By the project's conclusion, the database had expanded to encompass 14 disorders across more than 16 sites, with over 7,000 MRI datasets, making it the largest multi-site MRI database focusing on neurological and psychiatric disorders. We are now at the stage of taking this multi-center, multi-disorder MRI database to the next level. In the newly launched Brain/MINDS 2.0 project (2023–), the objective is to integrate anatomical and physiological brain data within a digital framework, reconstruct it as a computational model, and develop a platform capable of simulating specific human brain functions and pathological states. This platform will also be instrumental in the development of novel therapies and the more efficient evaluation of existing treatments. To achieve these aims, we plan to establish a "Brain Data Integration Platform," which will integrate diverse datasets and enable computational modeling and simulation, in collaboration with the human and animal databases developed under Brain/MINDS (2014–2023). I will present findings from previous studies that employed large-scale brain data and computational models of psychiatric disorders, demonstrating the potential for the integration of computational models with neuro-behavioral databases.
Short Bio: Saori C. Tanaka is an associate professor at the Nara Institute of Science and Technology and a department head at the ATR Brain Information Communication Research Laboratory Group. She received her Ph.D. in 2006 from the Graduate School of Information Science and Technology, Nara Institute of Science and Technology, Japan. Her research aims to understand the neural basis of decision-making, using an approach that combines non-invasive brain imaging and computational models of decision-making. She has recently focused on data-driven analysis using large-scale data and has been involved in the development and management of data-sharing systems for national flagship neuroscience projects in Japan.
Title: Brain Cognition Inspired Artificial Intelligence
Vice-President of the Chinese Association for Artificial Intelligence (CAAI),
President of Chongqing Normal University
Chongqing Normal University, China
Abstract: With the synergy of big data, big computing power and large models, artificial intelligence (AI) has in recent years made breakthrough progress in surpassing some key human intelligence abilities, such as visual intelligence, auditory intelligence, decision intelligence, and language intelligence. However, AI systems surpass certain human intelligence abilities only in a statistical sense and as a whole; they are not a true realization of these human intelligence abilities and behaviors. This talk reviews the role of cognitive science in inspiring the development of the three mainstream academic branches of AI, based on Marr's three-layer framework, and explores and analyses the limitations of the current development of AI. At the hardware implementation layer, the differences and inconsistencies between the mechanisms of human brain neurons and their neural connections and those of neurons and their connections in artificial neural networks (ANNs) cause two problems. First, the working mechanism of deep neural networks differs from the mechanism of human cognition, which manifests in contradictory phenomena such as recognition results of deep neural networks that differ greatly from human cognitive understanding. Second, it leads to the poor performance of spiking neural networks (SNNs). At the representation and algorithm layer, there are also many differences and inconsistencies between the problem-solving strategies and mechanisms of AI systems and those of the human brain, which cause inconsistencies and conflicts between the two in terms of intelligent cognition. At the computational theory layer, there are still inconsistencies between the computational and processing mechanisms of AI systems and those of the human brain. In view of the above limitations, eight important future research directions, and the scientific issues they raise, are proposed for brain-inspired AI research: 1) highly imitative bionic information processing; 2) large-scale deep learning models balancing structure and function; 3) multi-granularity joint problem solving bidirectionally driven by data and knowledge; 4) AI models simulating specific brain structures; 5) collaborative processing mechanisms with physical separation of perceptual processing and interpretive analysis; 6) embodied intelligence integrating brain cognitive mechanisms and AI computation mechanisms; 7) intelligence simulation from individual intelligence to group intelligence (social intelligence); 8) artificial-intelligence-assisted brain cognitive intelligence (AI4BI).
Short Bio: Guoyin Wang received the B.S., M.S., and Ph.D. degrees from Xi'an Jiaotong University, Xi'an, China, in 1992, 1994, and 1996, respectively. He worked at the University of North Texas and the University of Regina, Canada, as a visiting scholar during 1998-1999. He worked at the Chongqing University of Posts and Telecommunications from 1996 to 2024, where he was a professor, the Vice-President of the University, the director of the Chongqing Key Laboratory of Computational Intelligence, the director of the Key Laboratory of Cyberspace Big Data Intelligent Security of the Ministry of Education, the director of Tourism Multi-source Data Perception and Decision Technology of the Ministry of Culture and Tourism, and the director of the Sichuan-Chongqing Joint Key Laboratory of Digital Economy Intelligence and Security. He was the director of the Institute of Electronic Information Technology, Chongqing Institute of Green and Intelligent Technology, CAS, China, from 2011 to 2017. He has been serving as the President of Chongqing Normal University since June 2024. He is the author of over 10 books, the editor of dozens of proceedings of international and national conferences, and has more than 300 peer-reviewed research publications. His research interests include rough sets, granular computing, machine learning, knowledge technology, data mining, neural networks, cognitive computing, etc. Dr. Wang was the President of the International Rough Set Society (IRSS) from 2014 to 2017, and a council member of the China Computer Federation (CCF) from 2008 to 2023. He is currently a Vice-President of the Chinese Association for Artificial Intelligence (CAAI) and the President of the Chongqing Association for Artificial Intelligence (CQAAI). He is a Fellow of IRSS, I2CICC, CAAI and CCF.
Edward Feigenbaum (Turing Award Laureate) | WI-IAT 2001, WI-IAT 2012 |
Lotfi A. Zadeh | WI-IAT 2003 |
John McCarthy (Turing Award Laureate) | WI-IAT 2004 |
Tom M. Mitchell | WI-IAT 2004, WI-IAT 2021 |
Richard M. Karp (Turing Award Laureate) | WI-IAT 2007 |
Yuichiro Anzai | WI-IAT 2011 |
John Hopcroft (Turing Award Laureate) | WI-IAT 2013 |
Andrew Chi-Chih Yao (Turing Award Laureate) | WI-IAT 2014 |
Joseph Sifakis (Turing Award Laureate) | WI-IAT 2015, WI-IAT 2021 |
Butler Lampson (Turing Award Laureate) | WI 2016 |
Leslie Valiant (Turing Award Laureate) | WI 2016, WI-IAT 2021 |
Raj Reddy (Turing Award Laureate) | WI 2017 |
Frank van Harmelen | WI-IAT 2021 |