Limitations of Artificial Intelligence
People working in AI have no chance of producing a machine that displays true intelligence until they give that machine the ability to learn from the testimony of other agents. I have argued for this claim in a number of articles, but I'll briefly summarise here how I came to this conclusion.
In the late 1990s I became interested in how people acquire information from testimony. Although most of the knowledge anyone has is acquired by accepting what other people have said or written, until recently not much of epistemology was devoted to this topic. Philosophers tended to focus their attention on how we acquire knowledge through perception. There's a certain irony in people who spend a large part of their lives reading the works of others believing that perception is our most important source of knowledge.
I have devised an anti-justificationist, two-mode theory of how people learn from testimony and have developed it in several publications. My main interest in testimony was epistemological, but I soon realised that, if androids were ever to live amongst humans and interact meaningfully with us, they too would need the ability to acquire information from testimony. Sadly, hardly anyone working in AI or robotics is trying to work out how to give machines this ability. I have made some suggestions about how this could be done, but much more work is needed before androids will be truly intelligent.
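To give a flavour of what a two-mode treatment of testimony might look like in a machine, here is a minimal sketch of an agent that by default accepts what it is told and only later subjects accepted claims to critical scrutiny. The particular checks shown (source reliability and simple contradiction) and all class and method names are placeholders chosen for illustration; they are not a statement of the theory itself, nor of anything currently implemented in robotics.

```python
# A minimal, illustrative two-mode testimony-handling agent.
# Mode 1: default acceptance of what an informant asserts.
# Mode 2: later critical review that may discard accepted beliefs.
# The specific checks below are placeholder assumptions for the sketch.

from dataclasses import dataclass, field


@dataclass
class Belief:
    content: str            # the proposition asserted by the informant
    source: str             # who asserted it
    vetted: bool = False    # has the critical mode examined it yet?


@dataclass
class TestimonyAgent:
    beliefs: list = field(default_factory=list)
    distrusted_sources: set = field(default_factory=set)

    # Mode 1: testimony is taken on board immediately, without first
    # demanding a justification for it.
    def hear(self, content: str, source: str) -> None:
        self.beliefs.append(Belief(content, source))

    # Mode 2: previously accepted beliefs are checked and may be rejected,
    # here because the source is distrusted or the claim is contradicted
    # by something else the agent believes.
    def review(self) -> None:
        held = {b.content for b in self.beliefs}
        kept = []
        for belief in self.beliefs:
            contradicted = f"not {belief.content}" in held
            if belief.source in self.distrusted_sources or contradicted:
                continue  # discard this piece of testimony
            belief.vetted = True
            kept.append(belief)
        self.beliefs = kept


if __name__ == "__main__":
    agent = TestimonyAgent()
    agent.distrusted_sources.add("rumour_mill")
    agent.hear("the library closes at 6pm", "librarian")
    agent.hear("the library is closed today", "rumour_mill")
    agent.review()
    print([b.content for b in agent.beliefs])
    # prints: ['the library closes at 6pm']
```

A genuinely intelligent android would, of course, need far richer filtering than this, but the sketch shows the shape of the problem: acceptance and criticism are separate operations, and the second presupposes the first.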
© Antoni Diller (26 April 2014)