Professor, Khalifa University of Science and Technology, Abu Dhabi
Founding Director, KU 6G Research Center
Large Perceptive Models for the Future of Intelligent Connectivity
The next evolution of the Internet of Things (IoT) is not about connecting more devices; it is about making them understand us. In this talk, I introduce the emerging concept of Large Perceptive Models (LPMs): AI-driven systems that integrate large language models (LLMs) into the very fabric of the IoT. LPMs act as both interpreters of multimodal IoT data and optimizers of user intent, translating raw sensor signals into meaningful narratives and converting natural-language instructions into real-time control and optimization strategies. This shift redefines the role of AI in the IoT, from passive data processor to proactive collaborator. The result: a more human-centric, resilient, and explainable IoT, in which users no longer configure devices but simply converse with them.
Mérouane Debbah is a Professor at Khalifa University of Science and Technology in Abu Dhabi and the founding Director of the KU 6G Research Center. He is a frequent keynote speaker at international events in the fields of telecommunications and AI. His research lies at the interface of fundamental mathematics, algorithms, statistics, and information and communication sciences, with a special focus on random matrix theory and learning algorithms. In the communications field, he has been at the heart of the development of small cells (4G), Massive MIMO (5G), and Large Intelligent Surfaces (6G). In the AI field, he is known for his work on large language models, distributed AI systems for networks, and semantic communications. He has received multiple prestigious distinctions, prizes, and best paper awards (more than 50 IEEE best paper awards) for his contributions to both fields, and according to research.com he is ranked as the top scientist in France in the field of Electronics and Electrical Engineering. He is an IEEE Fellow, a WWRF Fellow, a EURASIP Fellow, an AAIA Fellow, an Institut Louis Bachelier Fellow, an AIIA Fellow, and a Membre émérite SEE. He is currently chair of the IEEE Large Generative AI Models in Telecom (GenAINet) Emerging Technology Initiative and a member of the Marconi Prize Selection Advisory Committee.
Distinguished University Professor, Nanyang Technological University, Singapore
Deep Model Fusion
In recent years, we have witnessed a profound transformation in the learning paradigm of deep neural networks, especially in the applications of large language models and other foundation models. While conventional deep learning methodologies maintain their significance, they are now augmented by emergent model-centric approaches such as transferring knowledge, editing models, fusing models, or leveraging unlabeled data to tune models. Among these advances, deep model fusion techniques have demonstrated particular efficacy in boosting model performance, accelerating training, and mitigating the dependency on annotated datasets. Nevertheless, substantial challenges persist in the research and application of effective fusion methodologies and their scalability to large-scale foundation models. In this talk, we systematically present the recent advances in deep model fusion techniques. We provide a comprehensive taxonomical framework for categorizing existing model fusion approaches, and introduce our recent developments, including (1) weight learning-based model fusion and data-adaptive MoE upscaling, (2) subspace learning approaches to model fusion, and (3) enhanced multi-task model fusion incorporating pre- and post-finetuning to minimize representation bias between the merged model and task-specific models.
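As a toy illustration of the weight-space fusion family surveyed above (illustrative only; the function name and uniform averaging are my simplifications, not the data-adaptive or subspace methods presented in the talk), same-architecture models can be merged by a weighted average of their parameters:

```python
import numpy as np

def fuse_weights(state_dicts, coeffs=None):
    """Fuse task-specific models by a weighted average of their parameters.

    A minimal sketch of weight-space model fusion, assuming all models
    share one architecture (identical parameter names and shapes).
    """
    if coeffs is None:
        # Default to uniform averaging; learned coefficients are one of
        # the many refinements richer fusion methods introduce.
        coeffs = [1.0 / len(state_dicts)] * len(state_dicts)
    fused = {}
    for name in state_dicts[0]:
        fused[name] = sum(c * sd[name] for c, sd in zip(coeffs, state_dicts))
    return fused

# Two toy "fine-tuned" models with the same (hypothetical) architecture
model_a = {"w": np.array([1.0, 2.0]), "b": np.array([0.0])}
model_b = {"w": np.array([3.0, 4.0]), "b": np.array([2.0])}
fused = fuse_weights([model_a, model_b])
```

Uniform averaging already conveys the core idea that fusion happens in parameter space rather than on predictions; the representation-bias issue mentioned in the abstract arises precisely because such averaged weights need not behave like any of the task-specific models.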
Dacheng Tao is currently a Distinguished University Professor and the Inaugural Director of the Generative AI Lab in the College of Computing and Data Science at Nanyang Technological University. He was an Australian Laureate Fellow and the founding director of the Sydney AI Centre at the University of Sydney, the inaugural director of JD Explore Academy and senior vice president at JD.com, and the chief AI scientist at UBTECH Robotics. He mainly applies statistics and mathematics to artificial intelligence, and his research is detailed in one monograph and over 300 publications. His publications have been cited over 140K times, and he has an h-index of 180+ on Google Scholar. He received the 2015 and 2020 Australian Eureka Prizes, the 2018 IEEE ICDM Research Contributions Award, recognition as a 2020 research superstar by The Australian, the 2019 Diploma of the Polish Neural Network Society, and the 2021 IEEE Computer Society McCluskey Technical Achievement Award. He is a Fellow of the Australian Academy of Science, ACM, and IEEE.
Distinguished Boya Professor, Peking University, China
Associate Dean of the School of Artificial Intelligence, Peking University
Computer Graphics in the AI Age
Physicist Richard Feynman once said: “What I cannot create, I do not understand.” The principal mission of Computer Graphics (CG) is to synthesize a digital world, which, according to Turing Awardee Ivan Sutherland, serves to “make that (digital) world look real, act real, sound real, feel real.” Conversely, the central objective of Artificial Intelligence is to understand the world and be able to act in it. As of today, neither CG nor AI has fully achieved its mission. In this talk, I will introduce recent developments in CG and illustrate how CG and AI can leverage their respective advancements to mutually accelerate progress.
Baoquan Chen is a Distinguished Boya Professor of Peking University, where he is the Associate Dean of the School of Artificial Intelligence. His research interests generally lie in computer graphics, computer vision, and visualization. He has received Best Paper Awards at several prestigious conferences, including ACM SIGGRAPH Asia (2022), ACM SIGGRAPH (2022, Honorable Mention), and IEEE Visualization (2005), and received a Test-of-Time Award at ACM SIGGRAPH 2025. Chen has served as chair of prestigious conferences such as SIGGRAPH Asia 2014, IEEE Visualization 2005, and 3D Vision 2017. He currently serves as an ACM SIGGRAPH Executive Committee Director. Chen is a Fellow of ACM and IEEE, and an inductee of both the ACM SIGGRAPH Academy and the IEEE Visualization Academy.
Professor and Vice Dean, Aalto University, Finland
Audio Signal Processing with the Giant FFT
Today, the Fast Fourier Transform (FFT) enables rapid processing of surprisingly long signals, a feat made possible by advancements in memory capacity and computing speed. This presentation explores how a large, one-shot FFT, coupled with spectral processing and an inverse FFT, can transform audio and speech signals in various, and even unexpected, ways. One particularly valuable technique is FFT-based sample-rate conversion, which achieves an arbitrary, constant rate change using a single FFT and inverse FFT. Along with its speed, this approach offers the advantages of simplified processing and the elimination of spectral imaging, a common issue with time-domain filtering techniques. Other current applications extend to creative effects for music, gaming, and film sound production, including audio time-scale modification, babble noise synthesis, and transforming music recordings into texture-like sound effects while maintaining a realistic timbre. Many of these techniques are straightforward to implement and serve as excellent examples in signal processing courses, enhancing the understanding of the properties of the complex spectrum.
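The single-FFT sample-rate conversion idea described above can be sketched in a few lines of NumPy (a minimal illustration, not the speaker's implementation; the function name is mine, and a practical converter would also smooth the band edge rather than truncate the spectrum abruptly):

```python
import numpy as np

def fft_resample(x, num_out):
    """Resample x to num_out samples using one FFT/IFFT pair.

    The full-length spectrum is truncated (downsampling) or zero-padded
    (upsampling) before inversion, so no spectral images appear by
    construction, unlike with time-domain interpolation filters.
    """
    n = len(x)
    X = np.fft.rfft(x)                    # one-shot FFT of the whole signal
    num_bins = num_out // 2 + 1           # bins in the output spectrum
    Y = np.zeros(num_bins, dtype=complex)
    k = min(len(X), num_bins)             # bins that survive the rate change
    Y[:k] = X[:k]
    y = np.fft.irfft(Y, n=num_out)
    return y * (num_out / n)              # rescale amplitudes to the new length

# Example: convert one second of a 1 kHz sine from 48 kHz to 44.1 kHz
fs_in, fs_out = 48000, 44100
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 1000 * t)
y = fft_resample(x, fs_out)
```

Because the whole signal fits in one transform, the conversion ratio (here 44100/48000) never has to be factored into integer up/down stages, which is what makes the arbitrary, constant rate change possible.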
Vesa Välimäki is a Professor of audio signal processing at Aalto University, Espoo, Finland. He is also the Vice Dean for research and the Head of the doctoral program at the Aalto University School of Electrical Engineering. His research group belongs to the Aalto Acoustics Lab, a multidisciplinary research center with excellent facilities for sound-related research. Prof. Välimäki is a Fellow of the IEEE (Institute of Electrical and Electronics Engineers), a Fellow of the AES (Audio Engineering Society), and a Fellow of the AAIA (Asia-Pacific Artificial Intelligence Association). He was a Board Member of Heureka, the Finnish Science Centre, from 2017 to 2025. In 2008, he was the General Chair of the 11th International Conference on Digital Audio Effects (DAFx-08), and in 2017, he was the General Chair of the 14th Sound and Music Computing Conference (SMC-17). From 2015 to 2020, he was a Senior Area Editor of the IEEE/ACM Transactions on Audio, Speech and Language Processing. From 2020 to 2025, Prof. Välimäki was the Editor-in-Chief of the Journal of the Audio Engineering Society, and currently, he is the Deputy Editor-in-Chief.
Professor, Tokyo University of Agriculture and Technology, Japan
AI-Inspired Brain/Biomedical Signal Processing
Toshihisa Tanaka received the B.E., M.E., and Ph.D. degrees from the Tokyo Institute of Technology in 1997, 2000, and 2002, respectively. From 2000 to 2002, he was a JSPS Research Fellow. From October 2002 to March 2004, he was a Research Scientist at the RIKEN Brain Science Institute. In April 2004, he joined the Tokyo University of Agriculture and Technology (TUAT), where he is currently a Professor of Electrical Engineering and Computer Science. In April 2025, he was appointed Vice-Trustee and Assistant to the President of TUAT. He also heads the Research Unit of Informatics for Human-Animal Interaction at the One Welfare Research Institute of TUAT. He was a Royal Society Visiting Fellow at the Communications and Signal Processing Group, Imperial College London, U.K., in 2005, and a Visiting Faculty Member in the Department of Electrical Engineering at the University of Hawaii at Manoa in 2011. His research interests include signal processing and machine learning, with particular emphasis on brain and biomedical signal analysis, brain–computer interfaces, and human–animal interaction. He is a co-editor of Signal Processing Techniques for Knowledge Extraction and Information Fusion (Springer, 2008) and the lead co-editor of Signal Processing and Machine Learning for Brain–Machine Interfaces (IET, 2018). Prof. Tanaka has served as an Associate Editor or Guest Editor for several international journals, including IEEE Access, Neurocomputing, IEICE Transactions on Fundamentals, Computational Intelligence and Neuroscience, IEEE Transactions on Neural Networks and Learning Systems, Applied Sciences, Advances in Data Science and Adaptive Analysis, and Neural Networks. He also served as Editor-in-Chief of Signals. He was the General Co-Chair of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) held in Tokyo in 2021.
He is currently a Senior Area Editor of IEEE Signal Processing Letters and Vice President for Member Relations and Development of APSIPA. He is serving as the General Chair of the 2028 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2028), to be held in Tokyo, Japan. He is a Senior Member of the IEEE and a member of IEICE, APSIPA, the Society for Neuroscience, and the Japan Epilepsy Society. He is also the Co-founder and CTO of Sigron, Inc.