Human-Swarm-Teaming Transparency and Trust Architecture

Abstract
Transparency is a widely used but poorly defined term within the explainable artificial intelligence literature. This is due, in part, to the lack of an agreed definition and to the overlap between the related, and sometimes synonymously used, concepts of interpretability and explainability. We assert that transparency is the overarching concept, with the tenets of interpretability, explainability, and predictability subordinate to it. We draw on a portfolio of definitions for each of these distinct concepts to propose a human-swarm-teaming transparency and trust architecture (HST3-Architecture). The architecture reinforces transparency as a key contributor to situation awareness, and consequently as an enabler of effective, trustworthy human-swarm teaming (HST).
| Original language | English |
|---|---|
| Article number | 9310665 |
| Pages (from-to) | 1281-1295 |
| Number of pages | 15 |
| Journal | IEEE/CAA Journal of Automatica Sinica |
| Volume | 8 |
| Issue number | 7 |
| DOIs | |
| Publication status | Published - Dec 2020 |
| Externally published | Yes |