2. Conceptual landscape of explicability
2.1 Paradoxes of transparency
Intelligent systems are often described as black boxes, a pejorative label, just as opacity here is a metaphor. Transparency seems to be the antidote to this flaw, but the word is ambiguous: it can mean invisibility, as in "this change is transparent to the user", or total visibility, as in access to software source code. Yet even deep neural networks, whose operation seems the most opaque, usually give full access to everything needed to reproduce their results: the model architecture and the weight file. Explaining how they work, however, requires genuine construction, much like understanding uncommented source code. We therefore no longer use the term transparency, which seems to us both ambiguous and beside the point here, but note that the argument of decoupling transparency...
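The distinction between reproducing a result and explaining it can be made concrete with a minimal sketch. The toy network and weight values below are hypothetical, chosen only for illustration: anyone holding the architecture and the weights can recompute the output exactly, yet the numbers themselves say nothing about *why* the output is what it is.

```python
import numpy as np

# "Full access": the architecture (two dense layers, ReLU) and the
# weight file (here, hypothetical toy values) are entirely visible.
W1 = np.array([[0.5, -1.2],
               [0.8,  0.3]])   # hidden-layer weights
b1 = np.array([0.1, -0.4])     # hidden-layer biases
W2 = np.array([[1.0],
               [-0.7]])        # output-layer weights
b2 = np.array([0.2])           # output-layer bias

def forward(x):
    """Anyone with the weights can reproduce this result bit for bit."""
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2                 # linear output layer

x = np.array([1.0, 2.0])
y = forward(x)
print(y)   # → [2.4] : reproducible, but the weights do not explain it
```

Reproducibility here is trivial; an explanation of the decision would still have to be constructed on top of these numbers, which is precisely the gap the paragraph above points to.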