Special issue on explainable dynamic data-driven systems for Smart Society 5.0

Published 24 November 2022

Introduction:

The first two decades of the new millennium have been overwhelmed by a global tsunami of information and communication technologies (ICT). The continuous evolution of ICT has had a substantial impact on people, who now live in a society that interfaces daily with intelligent systems. This has led to the concept of Smart Society 5.0, a term describing an ideal future society in which intelligent systems, exploiting technological platforms to their full potential, are able to process huge amounts of data and analyse complex scenarios. In this future, people will be constantly supported by advanced tools within interconnected societies driven by digital transformation and underpinned by artificial intelligence (AI). However, such systems are often considered unreliable because they hide their underlying decision-making processes. The explainable AI paradigm aims to reduce the harmful implications of this opacity by investigating methodologies that ensure fairness, responsibility and transparency.

This special issue will feature high-quality scientific contributions addressing novel aspects of intelligent systems that improve both the quality of people’s lives and their confidence in the decisions such systems provide.

Topics covered:

  • Argumentation theory for explainable AI
  • Decision model visualisation tools
  • Designing new explanation styles
  • Ethics in explainable AI
  • Evaluations of decision-making metrics and processes
  • Evaluations of the transparency and interpretability of AI systems
  • Explainable AI in risk management
  • Explainable and responsible AI systems for financial risk management
  • Explainable data mining and data profiling
  • Explainable decision-making processes
  • Explainable human-in-the-loop, dynamic, data-driven systems
  • Fairness and bias auditing
  • Human-machine interaction for explainable AI
  • Industry 4.0/5.0 systems
  • Information system management
  • Internet of Things (IoT)-based systems
  • Interpretable and transparent machine learning models
  • Monitoring and understanding system behaviour
  • Natural language generation for explainable AI
  • Privacy by design approaches for human data
  • Privacy-preserving explanations
  • Property risk assessment using explainable AI
  • Successful applications of interpretable AI systems
  • Technical aspects of algorithms for explanation
  • Theoretical aspects of explanation and interpretability

Important deadlines:

Submission deadline: 30 September 2023

Submission instructions:

Please read the Guide for Authors before submitting. All articles should be submitted online; please select the article type “SI: Explainable dynamic data-driven systems” when submitting.

Guest Editors:

  • Dr. Gaetano Cimino, Department of Computer Science, University of Salerno, Italy. Email: gcimino@unisa.it
  • Dr. Stefano Cirillo, Department of Computer Science, University of Salerno, Italy. Email: scirillo@unisa.it
  • Dr. Aftab Alam, Department of Management Sciences, Abasyn University, Peshawar, Pakistan. Email: aftab.alam@abasyn.edu.pk
