Daniel Billsus
Department of Information and Computer Science
University of California, Irvine
Irvine, CA 92697-3425
dbillsus@ics.uci.edu
Research on intelligent information agents has recently attracted much attention. As the amount of information available online grows with astonishing speed, people feel overwhelmed navigating through today's information and media landscape. Information overload is no longer just a popular buzzword, but a daily reality for most of us. This leads to a clear demand for automated methods, commonly referred to as intelligent information agents, that locate and retrieve information with respect to users' individual preferences.
As intelligent information agents aim to automatically adapt to individual users, the development of appropriate user modeling techniques is of central importance. Algorithms for intelligent information agents typically draw on work from the Information Retrieval (IR) and machine learning communities. Both communities have previously explored the potential of established algorithms for user modeling purposes (Belkin et al. 1997; Webb 1998). However, work in this field is still in its infancy and we see "User Modeling for Intelligent Information Access" as an important area for future research.
Our recent work has focused on the application of machine learning algorithms to user model acquisition from labeled text documents. Specifically, we have built an intelligent information agent designed to compile a daily news program for individual users (Billsus and Pazzani, 1999a/b). In this context we have identified a number of research issues that have not received much attention in previous work on user modeling for personalized information access. Here, we briefly identify these issues and present short summaries of our work in this area.
Intelligent agents for information access are typically aimed at assisting a user in the search for interesting or useful information. A large variety of agents that make use of machine learning techniques have been developed and presented in the literature (e.g. Pazzani and Billsus, 1997). Most of this work focuses on the acquisition of a precise model of the user's information need. However, in order to build truly useful information agents we also need to be aware of the user's knowledge. This allows us to distinguish between information that a user finds interesting but is already aware of, and information the user finds interesting but does not yet know. While there has been some work on explicit models of the user's knowledge in the user modeling community (in particular student modeling), this aspect has received virtually no attention in the Information Retrieval community. Furthermore, it is important to take into account that a user's information need changes as a direct result of interaction with information (Belkin, 1997).
In Billsus and Pazzani (1999b) we address this issue by explicitly storing information the system has recently presented to the user. Using a Nearest Neighbor approach, we limit presentation of new information to text documents (in our application news stories) that fall into an area that is close to documents the user has previously indicated as interesting, but is not too close to individual stories the system has presented before.
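To make this filtering rule concrete, the following Python sketch shows one possible implementation of such a nearest-neighbor filter. The bag-of-words representation, the cosine similarity measure and the two thresholds (MIN_SIM, MAX_SIM) are illustrative assumptions, not the parameters of the actual system.

```python
# Minimal sketch of the nearest-neighbor filtering idea described above.
# The similarity measure and thresholds are illustrative assumptions, not
# the configuration used in the actual news agent.
from collections import Counter
from math import sqrt

def vectorize(text):
    """Represent a document as a simple bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

MIN_SIM = 0.3   # story must be at least this similar to a story rated interesting
MAX_SIM = 0.9   # but not this similar to a story that was already presented

def should_present(story, interesting_stories, presented_stories):
    """Present a story only if it is close to the user's interests,
    yet not a near-duplicate of previously presented information."""
    v = vectorize(story)
    close_to_interests = any(cosine(v, vectorize(s)) >= MIN_SIM for s in interesting_stories)
    near_duplicate = any(cosine(v, vectorize(s)) >= MAX_SIM for s in presented_stories)
    return close_to_interests and not near_duplicate
```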
Systems that acquire user models from text documents typically require users to explicitly label text as either interesting or uninteresting, or to assign a relevance score on a certain scale. If we consider an intelligent information agent to be a personal assistant that gradually learns about our interests and retrieves interesting information, it would only be natural to have more expressive ways to communicate our preferences. For example, we might want to tell the agent that we already know about a certain topic, request additional information related to a certain topic, or ask for an explanation of the underlying reasons that have led to a certain recommendation.
In Billsus and Pazzani (1999a/b) we explicitly distinguish between documents labeled as not interesting and documents labeled as already known. This allows us to avoid presenting information similar to information labeled as known, while ensuring that a known story does not affect the current model of the user's interests. Furthermore, by identifying previously rated content or informative words present in recommended documents, the system can construct simple explanations for its recommendations. Explanations provide direct insight into the induced user model and allow the user to assess whether the specific aspect of the user model that led to a certain recommendation is useful for finding relevant information.
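As a rough illustration of this distinction, the sketch below keeps stories labeled as "known" out of the interest model while still using them to suppress near-duplicates, and builds a simple word-overlap explanation. The class and method names are hypothetical and only indicate one possible structure, not the actual implementation.

```python
# Illustrative sketch of handling "uninteresting" vs. "already known" feedback
# and of word-overlap explanations; names and structure are hypothetical.
from collections import Counter

class UserModelSketch:
    def __init__(self):
        self.rated = []   # (bag-of-words, label) pairs that train the interest model
        self.known = []   # stories labeled "known": used only to suppress similar stories

    def record_feedback(self, story, label):
        bow = Counter(story.lower().split())
        if label == "known":
            # A known story must not shift the interest model,
            # but similar stories should not be presented again.
            self.known.append(bow)
        else:  # "interesting" or "uninteresting"
            self.rated.append((bow, label))

    def explain(self, story):
        """Explain a recommendation by the informative words it shares with the
        most similar story the user rated as interesting."""
        bow = Counter(story.lower().split())
        best_words = []
        for rated_bow, label in self.rated:
            if label != "interesting":
                continue
            shared = sorted(set(bow) & set(rated_bow))
            if len(shared) > len(best_words):
                best_words = shared
        if not best_words:
            return "No explanation available."
        return "Recommended because it shares the words: " + ", ".join(best_words[:5])
```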
Applying machine learning algorithms to infer a user's interests is a difficult task, because we cannot assume that the user's interests are a static concept which can be modeled with steadily increasing accuracy as the user provides more information. Learning algorithms applied to this task should be capable of adjusting to the user's changing interests quickly, even after a long preceding training period.
In Billsus and Pazzani (1999a/b) we address this problem in two different ways. First, we induce a hybrid user model, consisting of separate models for the user's long-term and short-term interests. The short-term model is based on information recently rated by the user and uses a Nearest Neighbor approach to classify new information. This allows for rapid adaptation to the user's changing interests, as only a single labeled document is needed to identify future documents on similar topics. The long-term model is based on a naïve Bayesian classifier, and typically uses data collected over a longer period of time. It is used to compute probabilities of the relevance of documents in cases where the short-term model does not contain enough information to accurately classify a document.
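The sketch below illustrates this hybrid scheme under simple bag-of-words assumptions: the nearest-neighbor short-term model is consulted first, and a naïve Bayesian long-term model serves as a fallback when no recently rated story is sufficiently similar. The similarity threshold, Laplace smoothing and class labels are illustrative choices rather than the system's exact configuration.

```python
# Hedged sketch of a hybrid short-term / long-term user model.
# Features, thresholds and smoothing are illustrative assumptions.
from collections import Counter, defaultdict
from math import log, sqrt

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class HybridModel:
    SHORT_TERM_THRESHOLD = 0.3   # minimum similarity for the short-term model to apply

    def __init__(self):
        self.recent = []                       # (bag-of-words, label) pairs from recent ratings
        self.word_counts = {"interesting": defaultdict(int),
                            "uninteresting": defaultdict(int)}
        self.class_counts = Counter()

    def train(self, text, label):
        v = bow(text)
        self.recent.append((v, label))         # short-term memory of recent ratings
        self.class_counts[label] += 1          # long-term naive Bayes statistics
        for w, c in v.items():
            self.word_counts[label][w] += c

    def classify(self, text):
        v = bow(text)
        # 1. Short-term model: nearest neighbor over recently rated stories.
        sims = [(cosine(v, r), label) for r, label in self.recent]
        if sims:
            best_sim, best_label = max(sims)
            if best_sim >= self.SHORT_TERM_THRESHOLD:
                return best_label
        # 2. Fallback: long-term naive Bayes model with Laplace smoothing.
        scores = {}
        total = sum(self.class_counts.values())
        for label in ("interesting", "uninteresting"):
            vocab = len(self.word_counts[label]) + 1
            n = sum(self.word_counts[label].values())
            score = log((self.class_counts[label] + 1) / (total + 2))
            for w in v:
                score += log((self.word_counts[label][w] + 1) / (n + vocab))
            scores[label] = score
        return max(scores, key=scores.get)
```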
Second, we incorporate our system's ability to construct explanations into the learning process. The system allows users to critique the explanations it forms, i.e. users can indicate whether the same line of reasoning should be reused to present or filter out future documents. Since the system's explanations correspond directly to specific "concepts" represented in the user model, this form of feedback allows for direct changes to the induced model. For example, consider a case where a user has previously indicated interest in a certain topic. The system can now identify future documents on a similar topic and explain the reason for presenting these documents by referencing a previously rated document. The user can then critique the explanation formed by the system, indicating whether the same reason should be reused or avoided for future recommendations. While this approach requires more work from the user, it can lead to more flexible and accurate user models, as well as a reduction in the training data required to achieve a certain level of predictive accuracy.
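As a rough sketch of this feedback loop, a critique of an explanation can be treated as concept-level feedback on the previously rated story (the prototype) whose similarity triggered the recommendation: marking its line of reasoning as one to avoid simply removes it from future nearest-neighbor matching. The names below are hypothetical and do not describe the actual system's interface.

```python
# Hypothetical sketch: a critique of an explanation blocks or re-enables the
# rated story (prototype) whose similarity triggered the recommendation.
class CritiquableShortTermModel:
    def __init__(self):
        self.prototypes = {}   # story_id -> (bag-of-words, label) from rated stories
        self.blocked = set()   # prototypes the user asked not to reuse

    def critique(self, story_id, reuse):
        """Record the user's critique of an explanation that referenced story_id."""
        if reuse:
            self.blocked.discard(story_id)
        else:
            self.blocked.add(story_id)

    def active_prototypes(self):
        """Prototypes still eligible for nearest-neighbor matching."""
        return {sid: p for sid, p in self.prototypes.items() if sid not in self.blocked}
```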
In this position paper we have identified three research issues we consider important areas of future work on the automated induction of user models from labeled text documents. We have motivated the importance of explicit models of the user's knowledge, of more expressive forms of human-computer interaction, and of flexible user models that adapt to users' changing interests. Brief summaries of approaches used in our recent work served to illustrate initial steps towards improved user model acquisition from labeled text documents.
Billsus, D. and Pazzani, M. (1999a). "A Personal News Agent that Talks, Learns and Explains", Proceedings of the Third International Conference on Autonomous Agents (Agents '99), Seattle, Washington, May 1-5, 1999.
Billsus, D. and Pazzani, M. (1999b). "A Hybrid User Model for News Story Classification", Proceedings of the 7th International Conference on User Modeling (UM '99), Banff, Canada, June 20-24, 1999.
Belkin, N. (1997). User Modeling in Information Retrieval, http://www.scils.rutgers.edu/~belkin/um97oh/, Tutorial Overheads, Sixth International Conference on User Modeling, Chia Laguna, Sardinia, 1997.
Belkin, N., Kay, J., Tasso, C. (eds) (1997). Special Issue on User Modeling and Information Filtering. User Modeling and User Adapted Interaction, vol 7(3), Kluwer Academic Publishers.
Pazzani, M. and Billsus, D. (1997). "Learning and Revising User Profiles: The Identification of Interesting Web Sites", Machine Learning, 27, 313-331.
Webb, G. (ed) (1998). Special Issue on Machine Learning for User Modeling. User Modeling and User Adapted Interaction, vol 8(1-2), Kluwer Academic Publishers.