Trustworthy and explainable artificial intelligence for the Internet of Things (IoT)

Published 07 December, 2022


Designing and deploying artificial intelligence (AI) for the next generation of IoT devices and systems requires trust in, and explainability of, AI algorithms. Explainable AI (XAI) has been met with enthusiasm as a response to opaque black-box machine learning (ML) models. Practical deployment of AI in IoT systems requires establishing a high level of trust and transparency in these black boxes by adding explainability, interpretability and openness to intelligent IoT, unveiling the rationale behind AI predictions and decisions.

This special issue aims to investigate important challenges of XAI and its applications in IoT through tutorials and research papers. We seek original, completed and unpublished work not currently under review by any other journal, magazine or conference.


Topics covered:

  • AI/XAI for IoT systems
  • Federated learning for scalable IoT
  • IoT blockchain technologies and applications
  • Trust, security and privacy mechanisms for IoT
  • IoT and immersive and metaverse services
  • Wireless communication for IoT

Important deadlines:

  • Submission deadline: 15 September 2023
  • Publication date: 30 December 2023

Submission instructions:

Please read the Guide for Authors before submitting. All articles should be submitted online; please select SI: Trust Explain AI IoT on submission.

Guest Editors:


