Thursday 7 February 2008

Reframing the question

This semester I'm going to be studying the philosophy of artificial intelligence, a field of great interest to me. To get myself in the right frame of mind, I've been watching a fair bit of science fiction lately, particularly anything dealing with the future of robotics/cybernetics/whatever. The one with the most philosophy behind it so far has to be Bicentennial Man, the story of a robot with a "flaw" that allows him to be creative and develop his own character.

While it's hardly groundbreaking, it does give a good sense of the sorts of issues that surround the philosophy of artificial intelligence - the main one being, of course: when (if ever) is a robot a person? As soon as it demonstrates creativity? Or only when it becomes mortal?

Today I received a book I had ordered, Imitation in Animals and Artifacts. Flicking through it, it occurred to me that there's a question I had never heard asked: rather than "when is a robot to be considered a person?", "when is a robot to be considered equivalent to an animal?". Maybe this is nonsense, or leads nowhere useful, but it's a question I'd like to look into in more detail - if only to shed light on the personhood question. Perhaps I'll have the time to do so soon. For now, I'll just jot down some ideas:

If personhood is based on self-awareness, are there animals that we should consider to be persons? And how do we know when something is self-aware, if we have no way to communicate with it explicitly?

What would be the criteria for animalhood, as a parallel to personhood?

Does the fact that AI usually aims to simulate human intelligence make this a pointless debate? Or is that the aim of AI precisely because it has already reached or surpassed the intelligence of animals?

Is a computer that can beat any human player at chess more intelligent than an animal that lives in a complex social system and adapts to its surroundings?

Maybe this will be the focus of my artificial intelligence module this coming semester.