Conceptual landscape of explicability
Explainability in Artificial Intelligence: towards Responsible AI
Article REF: H5030 V1

Authors : Daniel RACOCEANU, Mehdi OUNISSI, Yannick L. KERGOSIEN

Publication date: December 10, 2022 | Review date: February 29, 2024

2. Conceptual landscape of explicability

2.1 Paradoxes of transparency

Intelligent systems are often described as black boxes, a pejorative term, just as opacity is used here as a metaphor. Transparency seems to be the antidote to this flaw, but the word is ambiguous: it can mean invisibility, as in "this solution change is transparent to the user", or total visibility, as in access to software source code. Yet even deep neural networks, whose operation seems the most opaque, often give full access to everything (model structure and weight file) needed to reproduce their results. Explaining how they work, however, requires a genuine construction, much like understanding uncommented source code. We therefore avoid the term transparency, which seems to us both ambiguous and irrelevant here, but note nevertheless that the argument of decoupling transparency...
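The distinction drawn above, that full access to a model's structure and weights guarantees reproducibility but not understanding, can be sketched in a few lines. The tiny network and weight values below are invented purely for illustration; they stand in for the "model structure and weight file" of a real deep network.

```python
import math

# Hypothetical weight file: fully visible, yet not self-explanatory.
WEIGHTS_HIDDEN = [[0.5, -1.2], [0.8, 0.3]]  # illustrative values only
WEIGHTS_OUT = [1.0, -0.7]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    """Deterministic forward pass: same input + same weights -> same output."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
              for row in WEIGHTS_HIDDEN]
    return sigmoid(sum(w * h for w, h in zip(WEIGHTS_OUT, hidden)))

# Anyone holding the weights can reproduce the result exactly...
y1 = forward([1.0, 2.0])
y2 = forward([1.0, 2.0])
assert y1 == y2  # reproducibility is guaranteed by the weight file alone
# ...but reading WEIGHTS_HIDDEN does not, by itself, explain the decision:
# that explanation is a separate construction, like reading uncommented code.
```

The point of the sketch is that "access" and "explanation" are different goods: the assertion passes trivially, while nothing in the listing says why the output is what it is.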
