AUTHORS
B. Spyropoulos and G. Papagounos
ABSTRACT
The operation of medical Artificial Intelligence systems depends on a large number of parameters and on the availability of an evaluative calculus that assigns relative importance to the various information items, since the data relevant to a case do not carry equal weight in the decision-making process that leads to diagnosis and treatment.
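To make the notion of an evaluative calculus concrete, the following minimal sketch (in Python) weights each information item by its assumed relative importance; the finding names, weights, and numbers are hypothetical illustrations introduced here, not part of any actual system.

    from typing import Dict

    def diagnosis_score(findings: Dict[str, bool], weights: Dict[str, float]) -> float:
        # Weighted share of the evidence that is actually present:
        # 1.0 means every weighted item supports the diagnosis.
        total = sum(weights.values())
        present = sum(w for item, w in weights.items() if findings.get(item, False))
        return present / total if total else 0.0

    # Hypothetical weights: a specific laboratory result carries far more
    # weight than a non-specific symptom in this particular diagnosis.
    weights = {"elevated_troponin": 0.6, "chest_pain": 0.3, "fatigue": 0.1}
    findings = {"elevated_troponin": True, "chest_pain": False, "fatigue": True}

    print(diagnosis_score(findings, weights))  # ~0.7, dominated by the lab result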
The two components of A.I. systems, the knowledge base and the inference methods based on the experience of physicians, do not usually incorporate socio-cultural or axiological characteristics. Further, the decisions reached entail ethical implications both with respect to the patient concerned and to the broader context of the health-care system in a given community. Specifically, the authors argue that ethical issues and value conflicts pertaining to A.I. systems in medicine arise, first, in terms of the development and the employment of the systems themselves. Second, they emerge in the diagnostic and therapeutic interaction between the patient and the health-care professional and, third, ethical problems appear with respect to the resources, human and material, allocated in dealing with the cases in question. As a result, A.I. systems should allow for a casuistic approach to the decision-making process in order to accommodate the particularities of each individual patient.
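A casuistic decision procedure of the kind argued for above could, for instance, let the particularities of the individual patient re-weight the recommendation that a general knowledge base would otherwise produce. The sketch below is a hypothetical illustration of that idea; the option names, scores, and value profile are assumptions introduced here, not drawn from any existing system.

    def casuistic_rank(clinical_scores, value_profile):
        # clinical_scores: option -> score produced by the general knowledge base.
        # value_profile: option -> multiplier expressing this patient's
        # socio-cultural or axiological stance toward that option.
        adjusted = {option: score * value_profile.get(option, 1.0)
                    for option, score in clinical_scores.items()}
        return sorted(adjusted.items(), key=lambda kv: kv[1], reverse=True)

    # The general knowledge base slightly favours surgery...
    clinical_scores = {"surgery": 0.8, "medication": 0.7}
    # ...but this particular patient attaches a strong negative value to invasive care.
    value_profile = {"surgery": 0.5}

    print(casuistic_rank(clinical_scores, value_profile))
    # [('medication', 0.7), ('surgery', 0.4)]: the individual case reverses
    # the general ordering, which is precisely what a casuistic approach permits.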
The role of an A.I. system depends much more on the proximity of the description of the empirical disease to well-defined and theoretically supported medical knowledge, and much less on the programming abilities and the sophisticated computer technology that enable the processing of huge amounts of data, a processing that often leads to a diagnostic “information pollution”. On the other hand, the available computer technology often offers the possibility of providing theoretical foundations to clinical practice, since it constitutes a powerful instrument for the acquisition and dissemination of medical knowledge. It is precisely this role, a theoretically supported and experimentally enriched approach to medical practice, that an A.I. system is called upon to fulfil.
The main challenge, however, which artificial decision-supporting systems face is the incorporation of social and ethical premises into the inference model employed. Such an incorporation necessitates the identification of the ethical issues involved in the decisions made in a health-care context. Further, it requires the codification and the classification of the ethical problems. Additionally, this procedure should focus on the various sections of the hospital and the activities that take place in them, since the problems are neither identical nor equally weighty in all departments.
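One possible form such a codification could take is sketched below; the issue codes, departments, and weights are hypothetical placeholders whose only purpose is to show that the same ethical problem can carry a different weight in different sections of the hospital.

    # Each coded ethical issue maps to the departments in which it arises,
    # together with an assumed weight; all entries are illustrative only.
    ETHICAL_ISSUES = {
        "informed_consent":    {"oncology": 0.9, "radiology": 0.4},
        "resource_allocation": {"intensive_care": 0.9, "outpatient_clinic": 0.3},
        "privacy_of_data":     {"psychiatry": 0.9, "orthopaedics": 0.5},
    }

    def salient_issues(department, threshold=0.5):
        # Return the coded issues whose weight in the given department
        # reaches the threshold, i.e. the department-specific problem list.
        return [issue for issue, by_dept in ETHICAL_ISSUES.items()
                if by_dept.get(department, 0.0) >= threshold]

    print(salient_issues("oncology"))        # ['informed_consent']
    print(salient_issues("intensive_care"))  # ['resource_allocation']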
This process can reach its goal only if it is accompanied by the education of those involved in the decision-making processes in the hospital. This means that the reasoning models employed in the inferences at the various sections of the hospital should be assimilated, and that the ethical issues present in health-care delivery should become familiar to the personnel involved in the decision-making procedure.