Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy

Michal Klincewicz*, Lily Frank

*Corresponding author for this work

    Research output: Contribution to journal › Conference article

    Abstract

    This paper analyses how two foundational principles of medical ethics, the trusted doctor and patient autonomy, can be undermined by the use of machine learning (ML) algorithms, and addresses the legal significance of this problem. The paper can serve as a guide for health care providers and other stakeholders on how to anticipate, and in some cases mitigate, ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map of what needs to be done to achieve an acceptable level of explainability in an ML algorithm when it is used in a healthcare context.

    Original language: English
    Journal: CEUR Workshop Proceedings
    Volume: 2681
    Publication status: Published - 2019
    Event: 2nd EXplainable AI in Law Workshop, XAILA 2019 - Madrid, Spain
    Duration: 11 Dec 2019 → …

    Keywords

    • Ethics
    • Explainability
    • Health care
    • Machine learning
