IEEE Symposium on Explainable, Responsible, and Trustworthy CI (IEEE CITREx)

Symposium Chair: Alfredo Vellido
Technical Activities Liaison/Strategy: Matthew Garratt
Symposium Technical Chair: Iván Olier
Symposium Publicity Chair: Bertha Guijarro
Symposium Industry Co-Chair: Pedro Larrañaga
Symposium Industry Co-Chair: Karina Gibert
Symposium Publication Co-Chair: Caroline König
Symposium Publication Co-Chair: Vicent Ribas

Scope

The development of explainable, responsible, and trustworthy computational intelligence (CI) and artificial intelligence (AI) is essential to foster transparency, accountability, and user confidence as these technologies take on an increasingly pervasive role. By ensuring that AI/CI systems can be understood, monitored, and relied upon to make decisions that comply with regulations and are bound by ethical principles, we not only mitigate potential risks and biases but also empower users to engage with these technologies with greater assurance, fostering the harmonious integration of AI into our daily lives.

The aim of the symposium is to discuss the ethical principles that govern the behaviour of AI/CI technology in light of current regulatory efforts, as well as the perspectives of operators, users, and other stakeholders impacted by decisions informed by such technologies. Moreover, the symposium will explore how clear, understandable, and interpretable explanations of AI/CI decisions can enhance transparency and foster user trust. The symposium will help promote the following principles: balancing the ecological footprint of technologies against their economic benefits; managing the impact of automation on the workforce; ensuring that privacy is not adversely affected; and addressing the legal implications of embodying AI/CI technologies in autonomous and automated systems.

We are seeking contributions that address either theoretical developments or practical applications, presenting innovative approaches and technological advancements in Explainable, Responsible, and Trustworthy CI.

Topics of interest include, but are not limited to:

Foundations of

  • Explainable CI
  • Interpretable CI

Explainable and Interpretable CI

  • Explainable data analytics
  • Explainable control systems
  • Explainable models
  • Extracting understanding from datasets
  • Safety-critical systems
  • Visualisation methods
  • Intrinsically interpretable methods
  • Ante-hoc vs post-hoc methods
  • Application domain specificity of methods

Trustworthy CI

  • Bias in CI methods
  • Fairness of CI methods
  • Accountability of CI methods
  • Transparency of CI methods
  • Role of explainable and interpretable methods
  • Human-machine trust and risk
  • Performance benchmarks for trust
  • Public perception
  • Role in politics
  • Role in manipulating public opinion

Responsible CI

  • Impact on the human workforce
  • Impact on the distribution of wealth
  • Impact on human cognition
  • Impact on social relatedness
  • Impact on the environment
  • Privacy
  • Ethics
  • Ethics in specific application domains
  • Legal implications
  • Legal implications in specific application domains
  • Regulatory frameworks
  • Governance
  • Standards

People-Centred CI

  • Public engagement
  • Co-production