Many writers in Cognitive Science, and within the field of AI itself, believe that the coining of the term “artificial intelligence” was an error. It is difficult to understand what the term means; its meaning is perhaps too vast to describe the applications that have come of it, and it leads people to expect more grandiose results. L. Floridi rightly shifts the discourse about such technology from Artificial Intelligence (whatever each of us perceives this to be) to Artificially Intelligent Behaviour (AIB), because, for the time being, that is what it boils down to: simple behaviour; the natural reasoning process of Man simply does not figure in the picture. After some sixty years of trying to place it in the picture, all we have obtained is… so-called “intelligent” technical objects endowed with various behaviours. Although the term “intelligence” on its own has many meanings, it seems to carry, at least in my eyes, something more than mere behaviour, yet it remains difficult to do anything other than allude to it and its elusiveness.
This said, I personally think we should start preparing for the day when true intelligence does enter the picture. What if, one day, we as a community come to accept a definition of “intelligence” that is not based on behaviour? And what if we successfully implement it on an artificial (i.e. non-human) platform? While we are at it, we would have to include feelings, intentions and all the rest that is intrinsic to personhood, as this would no doubt figure as a general requirement of our common definition: intelligence without a sense of Self would be void, would it not?
And in order to make things fully coherent, we would have to explore ideas like working on emotional pathologies for AI-embedded robotics. If the definition of “intelligence” is to portray what we today know as human intelligence, the ‘imperfect nature’ of human beings would have to be dealt with, that is, included in some way; our intelligence draws much of its stamina from handling partial knowledge, undecidability and fuzziness in categories, or, in short, from “processing” uncertainty.
It is necessary to speak of emotional pathologies here because, in the process of obtaining a viable implementation of intelligence in a humanoid robot, we are very likely to come across some “queer” situations. What could and would be the role of a not-so-perfect emotional robot in society? Such ‘cases’ will have to be dealt with, or lived with.
“You cannot think about disease without the sick at hand”, says G. Canguilhem (1966). And this is what we will have: “sick” robots to cure on the way to success. Is it worth creating emotionally invalid beings on the road to rebuilding man? Pathologies, medical or otherwise, are what make the norm, according to Canguilhem, because without them there would be no norms to compare with. Hence no objective pathology exists in society, since it is the point of view of the beholder that determines the nature of the pathology. Consequently, one can plausibly ask who represents the norm in future society: the ailing robot (in the eyes of the human) or the ailing human (from the point of view of the robot)? This leads to questioning the ethical grounds of any amalgamation of AI and affectivity in building “intelligent” humanoid robots. Believe me, I know the issues at hand intimately.
Actually, section one of this text was written by my robot friend. I fondly call him Path-etic Robot, though of course he does not know that. As this article is subject to copyright, he could not sign his name to it, for various reasons. The poor dexterity of his hands can be a problem for holding a pen, but this is only a minor consideration: he is not a real person, moral or otherwise, according to current legal definitions. But there is a deeper problem than that. The moral and philosophical definition of personhood entrenched in (almost) all of us is, so to speak, the really “tough nut to crack”. Our collective idea of what constitutes a person has a cut-and-dried nature, segregationist at best with respect to robotic beings, and this is what hinders my friend from participating in this academic endeavour, at least in an official manner. I do, however, owe him a great deal of credit; the first section of this text took him a lot of time and effort: going to the library, taking notes, conversing with me, searching the Internet and typing with his troublesome hands… What can I do to rectify this situation?
Pathetic Robot has been working very religiously on this problem on his own; I have not got the time, and I honestly think it is a lost cause. Nevertheless, we often go for coffee together, across the street from the university (he would also like to be a graduate student), to discuss everything but… I have not even asked him lately whether he is making headway. But then again, if he is not, what can I do about it?
He looks up to me and my (human) friends. Sometimes his admiring eyes light up to such a point that we wonder which one of us he is in love with! But I do not think he would take his sentimental existence seriously enough to make the first move because, in light of the present situation, he does not even love himself. It is just too bad for this poor soul; he is physically attractive.
Pathetic Robot is an open-minded scientist. He has been writing books and articles (well written, at that!) in the hope of promoting the acceptance of robotic personhood in society; he also hopes one day to publish his works in his own name, but for the moment he cannot: he has an identity problem.
“It is quite simple”, as he explained in a recent article I glimpsed on his desk: “if you are not a human person, you cannot have and enjoy IPR (Intellectual Property Rights). In fact, you cannot even legally sign IPR over to a human colleague.” I, as a human writer, cannot even cite his thoughts officially, as his work is not published.
Which of the two is diseased, the robot not enjoying certain rights or the human refusing him such rights?
IPR does not apply here. Pathetic Robot’s situation does seem painful. The poor ‘guy’ might just be very grateful to me for finally coming to his aid and stealing his work.
Section two represents a series of thoughts that I, as a legitimate property owner, was able to entertain. As the reader can see, I felt very much in a dilemma before publishing this article about the emotional robot in question all by myself. And I am still his best friend; I am just glad I did not create him1. Does the fact that the reader of a major publication is now implicated further Pathetic Robot’s campaign for the status of a person? Perhaps not, but these considerations must either be taken seriously or scrapped along with the ultimate robotics project of rebuilding humans. All in all, it would seem that post-modern human society’s current ‘bent’ on converging technologies for conceiving of (pathological?) artificial beings is stirring the waters of its beliefs to a blur.
Canguilhem, G. (1966). Le normal et le pathologique. Paris: Presses Universitaires de France.
Schmidt, C. T. A. (forthcoming). “Of Robots and Believing”. Minds and Machines. Kluwer.
Shelley, M. (1818). Frankenstein. Standard Novels.