Keynotes

Applications and Challenges of Artificial Intelligence with Edge Systems in Promoting the Sustainable Development Goals

Arun Kumar Sangaiah (Senior Member, IEEE) received his Ph.D. from the School of Computer Science and Engineering, Vellore Institute of Technology, India, in 2014. He has served as a Visiting Scientist at the Chinese Academy of Sciences (CAS), Beijing, and as a Visiting Researcher at Université Paris-Est, France. He is currently a Full Professor at National Yunlin University of Science and Technology, Taiwan. His research has been supported by several international projects, and he has published over 300 journal articles, 11 edited books, and one patent. Dr. Sangaiah has received multiple honors, including recognition as a Clarivate Highly Cited Researcher, the Yushan Young Scholar Fellowship, and the PIFI-CAS Fellowship. He also serves as Editor-in-Chief and Associate Editor of several ISI-indexed journals.

This keynote addresses the transformative potential of Artificial Intelligence (AI) and edge systems in accelerating progress toward the United Nations Sustainable Development Goals (SDGs). By bringing intelligence closer to data sources, edge-enabled AI enhances real-time decision-making, resource efficiency, and sustainability across diverse sectors. The talk highlights practical applications such as AI-driven solutions for sustainable agriculture, biomass estimation, and water quality prediction—demonstrating how technology can support ecological balance and societal well-being. It also examines key challenges, including data quality and accessibility, ethical and regulatory considerations, and the critical need for locally adaptable, inclusive approaches. Through these perspectives, the keynote underscores how context-aware, interdisciplinary innovation at the intersection of AI and edge computing can serve as a powerful catalyst for sustainable development worldwide.

Prof. Arun Kumar Sangaiah

National Yunlin University of Science and Technology, Taiwan

Prof. Muhammad Tariq

National University of Computer and Emerging Sciences (NUCES), Pakistan

Illuminating the Black Box: The New Era of Explainable AI in Critical Infrastructure

Muhammad Tariq (Senior Member, IEEE) received his M.S. from Hanyang University, South Korea, and his Ph.D. from Waseda University, Japan, as a MEXT Scholar. He is a Professor and Head of the School of Engineering at NUCES, Islamabad, and a Visiting Research Collaborator at Princeton University, USA. His research covers IoT, network science, digital twins, cybersecurity, and smart grids, with numerous publications in leading IEEE journals. He has also been included in Stanford University's list of the world's top 2% scientists.

As artificial intelligence (AI) continues to transform critical infrastructure, ranging from healthcare and smart grids to intelligent transportation and renewable energy systems, its potential for enabling automation, predictive intelligence, and operational resilience is undeniable. Yet, the opacity of conventional “black-box” AI models raises pressing concerns about trust, accountability, and regulatory compliance in safety-critical domains.
This keynote delves into the paradigm shift toward explainable and interpretable AI (XAI), where transparency becomes a cornerstone of responsible innovation. It will highlight cutting-edge interpretability techniques such as SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and Integrated Gradients, demonstrating how these methods reveal the reasoning behind AI-driven decisions. The session underscores how explainable AI can bridge the gap between performance and trust, empowering engineers, policymakers, and stakeholders to deploy AI confidently within mission-critical environments. Real-world case studies will demonstrate the application of XAI in:
  • Clinical decision support systems in healthcare,
  • Solar power forecasting and battery state-of-charge (SoC) and state-of-health (SoH) estimation in renewable energy,
  • Fault detection, anomaly prediction, and load balancing in smart grid systems, and
  • Decision transparency and risk assessment in intelligent transportation networks.
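As a brief illustration of the kind of per-decision attribution these techniques produce, the sketch below uses SHAP to explain individual predictions of a simple regression model. The synthetic data, the load-forecasting framing, and the use of the open-source shap and scikit-learn packages are illustrative assumptions, not material from the keynote.

# Minimal sketch (assumptions: the open-source `shap` and scikit-learn packages
# are installed; the data is synthetic and the load-forecasting framing is
# purely illustrative).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical inputs to a load-forecasting model: temperature, hour of day, prior load.
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

model = GradientBoostingRegressor().fit(X, y)

# SHAP decomposes each individual prediction into additive per-feature
# contributions relative to a baseline (the model's expected output).
explainer = shap.Explainer(model)
explanation = explainer(X[:5])

print(explanation.values)       # per-feature contributions for five predictions
print(explanation.base_values)  # baseline expected model output

LIME and Integrated Gradients yield analogous local attributions, via perturbation-based local surrogate models and gradient integration along a path from an input baseline, respectively.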
The keynote will also address the challenges of deploying interpretable AI at scale, the balance between accuracy and explainability, and the emerging synergy between XAI, Digital Twin technology, and Agentic AI frameworks. Attendees will gain a comprehensive perspective on how XAI can drive responsible, reliable, and human-centric innovation in next-generation infrastructure systems.