Automating information acquisition
Since 1999, I have been developing and elaborating a two-mode theory of how people acquire propositional information from testimony. Most of the time, when we are listening to others or reading what they have written, we operate according to the first mode. We respond to testimony as if our response were governed by a defeasible rule: simply accept the assertions we encounter. Being defeasible, this acquisition rule can be overridden. Typically, the flow of information we are exposed to is quite rapid. Imagine listening to someone relate what they got up to on holiday. Assertion follows assertion in quick succession. We do not have time to investigate carefully the correctness of every statement. However, certain features of an assertion may make us wary of its truth. These overriding factors are learnt, and they differ from person to person.
Overriding factors vary from person to person
On the 30th of October 2013, David Cameron said, in the House of Commons, "Fuel poverty went up under Labour, and under this government, we've maintained the winter fuel payments, [and] we have increased the cold weather payments." Members of his own party are likely to accept these statements without further ado and feel good about themselves, but a socialist is likely to be more sceptical about their veracity.
The time-consuming second mode
Fariha Karim, on the Channel 4 website, checked the correctness of Cameron's assertions. In doing this, she was operating according to the second mode. In this mode we thoroughly investigate a statement we have come across in order to work out whether or not it is true. We can only do this for a very small number of the assertions we encounter, because such checking is time-consuming and also because of the snowball effect: in checking any statement we have to assume the correctness of many other statements.
Implementing the acquisition rule
I think it is extremely unlikely that computers will ever be able to operate according to the second mode, but they will probably one day be able to operate according to the first mode in very limited domains. Several years ago, one of my students built a simple system which illustrates this. It should be noted that, when operating according to this first mode, we tend to believe most of the assertions we encounter. Whether or not they are actually true is another matter. The truth of a statement depends on how things are in the world. Apart from self-contradictory statements, we cannot tell that an assertion is false just by investigating properties of that assertion in isolation.
Rejecting rumours found on social media
The system Lindsay (2004) built extracted information from an internet message board (an early form of social media) devoted to rumours about football transfers in the English Premier League. The assertions it evaluated were of two forms, namely either "player X is about to join club Y" or "player X is about to leave club Y". As explained in Diller (2002), an overriding factor can be incorporated into a rule. For example, we do not usually believe that the assertions made by actors in a play are true in the real world. As a first approximation, our response to such assertions can be seen as being governed by the conditional rule, "If assertion P is uttered during a play by an actor, do not accept P." The defeasible acquisition rule can be modelled as an ordered set of such conditional rules followed by the non-defeasible rule to accept the assertion P. Such a collection I call an assessment component.
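To make the idea concrete, the sketch below shows one way an assessment component might be coded. It is only an illustration: the names used (Assertion, Rule, AssessmentComponent) and the representation of an assertion's context are chosen here for clarity and are not taken from Lindsay's system or from the formalism of Diller (2002).

```python
# Minimal sketch of an assessment component: an ordered list of defeasible
# rules, each pairing a condition with a verdict, followed by the final,
# non-defeasible rule to accept any assertion that reaches it.
# All names here are illustrative, not Lindsay's or Diller's (2002).

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Assertion:
    """An assertion together with features of the context in which it was made."""
    text: str
    context: dict = field(default_factory=dict)  # e.g. {"uttered_in_play": True}


@dataclass
class Rule:
    """A defeasible rule: if the condition applies, return its verdict."""
    condition: Callable[[Assertion], bool]
    accept: bool


class AssessmentComponent:
    """An ordered set of defeasible rules; the default, non-defeasible rule is to accept."""

    def __init__(self, rules):
        self.rules = rules

    def assess(self, assertion):
        # Try the overriding rules in order; the first one whose condition
        # holds decides whether the assertion is accepted.
        for rule in self.rules:
            if rule.condition(assertion):
                return rule.accept
        # No overriding factor applied: fall through to the default rule.
        return True


# Example overriding factor: do not accept assertions uttered by an actor in a play.
component = AssessmentComponent([
    Rule(condition=lambda a: a.context.get("uttered_in_play", False), accept=False),
])

line = Assertion("I have killed the king.", {"uttered_in_play": True})
print(component.assess(line))  # False: the overriding factor blocks acceptance
```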
Rather than having a hardwired set of assessment-component rules, Lindsay's system tried various sets of rules to see which was the best. In order to use these rules, his system had to keep track of the informant and the date of each posting. He first isolated ten or so possibly relevant factors. These included the credibility of the informant, belief-density (a measure of how many irrelevant statements the informant made), the correctness of the informant's punctuation and the number of false claims the informant had previously made. Unlike the method described in Diller (2002), Lindsay's system gave numerical values to these factors. Then, in the manner of evolutionary computation, initial rule sets were generated randomly and, over several generations, more successful rule sets were produced. The performance of the final set of rules was only slightly worse than that of a human evaluator. Because this was a small-scale project, it would be inappropriate to base too much on it. The results obtained appear quite promising, however, and I hope that this will encourage others to pursue similar projects.
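The sketch below conveys the flavour of such an evolutionary search. It is not Lindsay's implementation: the representation of a rule set as a vector of thresholds, the particular factor names, the fitness function and the mutation operator are all simplifications chosen here for illustration.

```python
import random

# Factor scores are assumed to be normalised to [0, 1], with higher values
# indicating a more trustworthy posting (so "false_claims" here measures the
# absence of previously made false claims). The factor names are illustrative.
FACTORS = ["credibility", "belief_density", "punctuation", "false_claims"]


def assess(thresholds, posting):
    """Believe the rumour only if every factor score clears its threshold."""
    return all(posting[f] >= thresholds[f] for f in FACTORS)


def fitness(thresholds, labelled_postings):
    """Count how often the rule set agrees with a human evaluator's verdicts."""
    return sum(assess(thresholds, p) == verdict for p, verdict in labelled_postings)


def mutate(thresholds, step=0.1):
    """Nudge one randomly chosen threshold up or down."""
    child = dict(thresholds)
    f = random.choice(FACTORS)
    child[f] = min(1.0, max(0.0, child[f] + random.uniform(-step, step)))
    return child


def evolve(labelled_postings, population=20, generations=50):
    """Generate rule sets randomly, then keep and mutate the fitter ones."""
    pool = [{f: random.random() for f in FACTORS} for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=lambda t: fitness(t, labelled_postings), reverse=True)
        survivors = pool[: population // 2]
        pool = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    pool.sort(key=lambda t: fitness(t, labelled_postings), reverse=True)
    return pool[0]


# A tiny made-up training set: factor scores plus a human accept/reject verdict.
data = [
    ({"credibility": 0.9, "belief_density": 0.8, "punctuation": 0.7, "false_claims": 0.9}, True),
    ({"credibility": 0.2, "belief_density": 0.3, "punctuation": 0.4, "false_claims": 0.1}, False),
]
best = evolve(data)
print(best, fitness(best, data))
```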
References
- Antoni Diller, "A Model of Assertion Evaluation", Cognitive Science Research Papers, School of Computer Science, University of Birmingham, CSRP-02-11 (November 2002); a PDF version of this paper is available on this website.
- Jonathan Lindsay, "The Electric Monk: A Belief Filtering System Based on Defeasible Rules", BEng project dissertation, School of Computer Science, University of Birmingham, April 2004.
- Antoni Diller, "Testimony from a Popperian Perspective", Philosophy of the Social Sciences, ISSN 0048-3931, vol. 38.4 (2008), pp. 419–456. Subscribers to Philosophy of the Social Sciences can read "Testimony from a Popperian Perspective" online.
© Antoni Diller (22 March 2014)