AHA! A General-Purpose Tool for Adaptive Websites

Paul De Bra, Natalia Stash
Department of Computer Science
Eindhoven University of Technology
PO Box 513, 5600 MB Eindhoven
The Netherlands
+31 40 2472733


ABSTRACT

Many organizations with websites acknowledge the need for personalization: the selection and presentation of information are based on a user model. This paper briefly presents AHA! (Adaptive Hypermedia Architecture), a general-purpose server-side tool for making websites adaptive, meaning that the personalization is performed automatically, based on the pages the user visits. AHA! was first created as a simple tool for adaptive on-line courses [4] but is now being turned into a powerful general-purpose tool thanks to a grant from the NLnet Foundation. Information about AHA!, including an adaptive tutorial, can be found on the AHA! website http://aha.win.tue.nl/.


KEYWORDS

adaptive hypermedia, user modeling, Web personalization, Web design


INTRODUCTION

The World Wide Web has become the prime publication medium for many organizations, companies and individuals. Developing large websites requires careful design, planning and implementation, and personalization has become an essential part of this process. In this paper we concentrate on automatic personalization; the resulting websites are often called adaptive. Many adaptive Web-based applications exist to date: papers such as [1,3] and the proceedings of conferences and workshops on adaptive hypermedia and adaptive Web-based systems list many of them. There are two approaches to creating adaptive websites. In [6] an attempt is described to improve the organization and presentation of a website by learning from visitor access patterns. In our research, and also in systems like Interbook [2], the adaptation is based on rules and conceptual structures defined by an author.

Most adaptive applications are built using special-purpose (server-side) software. Interbook [2], for instance, is a tool for adaptive courseware. It has a fixed presentation form using several frames, an adaptive table of contents, etc., and it performs link adaptation (annotation) based on prerequisite relationships. Using such a system for other applications, such as information kiosks, corporate websites, museum websites or on-line mail-order catalogs, would not be feasible. Nevertheless, the basic kinds of user modeling and adaptation required in all these applications are not all that different: one only needs more types of relationships between concepts or information items, and more presentation freedom. With the AHA! system we aim to deliver a general-purpose Web-based adaptive hypermedia system, by allowing many types of adaptation rules without enforcing any, and by allowing complete presentation freedom. In this paper we first describe the main features of the (new) AHA! system, and then point out some research issues.


HOW AHA! WORKS

AHA! is prototypical for Web-based server-side adaptive engines:

  1. The system is "triggered" when the user accesses (and retrieves) a page.
  2. A user model is used to determine how the page needs to be adapted.
  3. The user model is updated, based on the page access.
  4. The adapted page is sent to the browser.

AHA! works this way through the use of Java Servlets that adapt local or remote webpages according to a user model. The main features of AHA! are described below.
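The four-step request cycle above can be sketched in plain Java. This is a minimal illustration, not the actual AHA! implementation: the class name, the per-concept knowledge attribute and the 0-100 scale are all invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of the four-step adaptive request cycle:
 * the page access triggers the engine, the user model is consulted,
 * then updated, and the adapted page is returned.
 * All names here are illustrative, not the real AHA! API.
 */
public class AdaptiveEngine {
    // User model: concept name -> knowledge value (0..100), for one user.
    private final Map<String, Integer> userModel = new HashMap<>();

    /** Steps 1-4: triggered by a page access, returns the adapted page. */
    public String handlePageAccess(String pageConcept, String rawPage) {
        // Step 2: consult the user model to decide how to adapt the page.
        boolean firstVisit = !userModel.containsKey(pageConcept);
        // Step 3: update the user model based on this access.
        userModel.merge(pageConcept, 100, (old, inc) -> Math.min(100, old + inc));
        // Step 4: return the adapted page (here: a trivial annotation).
        return firstVisit ? rawPage + " [new]" : rawPage + " [visited]";
    }

    public int knowledgeOf(String concept) {
        return userModel.getOrDefault(concept, 0);
    }

    public static void main(String[] args) {
        AdaptiveEngine engine = new AdaptiveEngine();
        System.out.println(engine.handlePageAccess("xml-intro", "<p>Intro to XML</p>"));
        System.out.println(engine.handlePageAccess("xml-intro", "<p>Intro to XML</p>"));
    }
}
```

In the real system this logic would sit inside a servlet's request handler, with one user model per authenticated user rather than per engine instance.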


Document Format

Several document formats have been used for AHA! in the past, all based on HTML. The first generations used HTML comments for the items used by the adaptation engine, for instance the "if" statements that conditionally include fragments. Later the format was changed to XML. The content of the pages had to be made opaque to the XML parser so that only the AHA! tags would be seen; <![CDATA[...]]> constructs are now used to keep HTML tags from disturbing the XML parsing. We are currently experimenting with modularized XHTML, which allows us to add a module for the AHA!-specific tags. Using this technology also enables the use of AHA! for other XML-based languages; we are looking into the possibility of combining SMIL with AHA! to create adaptive multimedia presentations. Performance is still an issue with the modularized XHTML experiment. Existing AHA! applications do not produce a noticeable overhead (serving most pages takes less than 0.1 second, and even really long pages take less than 1 second), but with modularized XHTML (and the Xerces parser) the processing overhead is much larger because of the time needed to parse the XHTML DTD. We trust that XML parsing technology will improve, possibly through pre-compilation of DTDs, to eliminate this problem in the future.
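The role of the CDATA sections can be demonstrated with a standard Java XML parser. The <page> and <if> element names and the expression syntax below are invented for illustration; the point is the CDATA mechanism itself: the HTML inside it, including a bare <br> tag that would otherwise make the document ill-formed XML, passes through the parser untouched.

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class CdataDemo {
    /** Parses a tiny AHA!-style page and returns the fragment inside <if>. */
    public static String fragmentText() {
        String page =
            "<page>"
          + "<if expr=\"knowledge.xml &gt; 50\">"
          + "<![CDATA[Advanced material with a bare <br> tag inside.]]>"
          + "</if>"
          + "</page>";
        try {
            DocumentBuilder db =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = db.parse(
                new ByteArrayInputStream(page.getBytes(StandardCharsets.UTF_8)));
            Element ifTag = (Element) doc.getElementsByTagName("if").item(0);
            // The CDATA body comes back verbatim, <br> and all; without CDATA
            // the unclosed <br> would be rejected as ill-formed XML.
            return ifTag.getTextContent();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(fragmentText());
    }
}
```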

Adaptation Rules

The AHA! rule system for updating the user model is very versatile. A page access triggers the generate rule for that page, which updates certain attributes of some concepts; these updates in turn trigger rules on attributes of other concepts, and so on. The behavior of such a rule system is very much like that of triggers in active database systems. We have studied condition-action rules in our AHAM reference model [5,7]. Potential problems of such rule systems include (non-)termination and (non-)confluence. It is possible to detect infinite loops at runtime, but this is of limited use: with runtime checks, it is the end user who receives the corresponding error messages when the system "misbehaves".
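The cascading trigger behavior, and why a cyclic rule set is dangerous, can be shown with a toy rule engine. The rule representation, attribute names and the crude depth limit standing in for runtime loop detection are all invented for this sketch; real AHA! rules are richer than "add a constant".

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Toy sketch of cascading attribute-update rules, in the spirit of
 * active-database triggers: one update fires rules that cause further
 * updates. A depth limit stands in for runtime loop detection; without
 * it, a cyclic rule set would recurse forever.
 */
public class RuleEngine {
    private static final class Rule {
        final String source, target; final int delta;
        Rule(String s, String t, int d) { source = s; target = t; delta = d; }
    }

    private final Map<String, Integer> model = new HashMap<>();
    private final List<Rule> rules = new ArrayList<>();
    private static final int MAX_DEPTH = 100; // crude non-termination guard

    /** When `source` changes, add `delta` to `target`. */
    public void addRule(String source, String target, int delta) {
        rules.add(new Rule(source, target, delta));
    }

    public void update(String attribute, int delta) { update(attribute, delta, 0); }

    private void update(String attribute, int delta, int depth) {
        if (depth > MAX_DEPTH)
            throw new IllegalStateException("rule cascade too deep; likely a cycle");
        model.merge(attribute, delta, Integer::sum);
        for (Rule r : rules)              // fire every rule triggered by this change
            if (r.source.equals(attribute))
                update(r.target, r.delta, depth + 1);
    }

    public int valueOf(String attribute) { return model.getOrDefault(attribute, 0); }

    public static void main(String[] args) {
        RuleEngine e = new RuleEngine();
        e.addRule("xml-page.read", "xml.knowledge", 50);
        e.addRule("xml.knowledge", "markup.knowledge", 25);
        e.update("xml-page.read", 1);     // one page access cascades twice
        System.out.println(e.valueOf("markup.knowledge")); // prints 25
    }
}
```

Note that when the guard fires, the user model has already been partially updated; this is exactly why a runtime check is a poor substitute for ruling out cycles at authoring time.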

In [7] we have shown how to predict termination and confluence problems during the authoring process, using static analysis techniques. An AHA! authoring tool for the concept structure (with the requirement and generate rules) is currently under development. This tool will include static analysis checks to help authors create valid rule sets.
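The graph-based core of such a static termination check can be sketched as follows: build a "this attribute's update can trigger that attribute's update" graph from the rule set, and reject rule sets whose graph contains a cycle. The actual analysis in [7] is considerably more refined; this only illustrates the idea with a standard depth-first search.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Static-analysis sketch: detect potential non-termination by looking
 * for a cycle in the trigger graph (edges: attribute -> attributes its
 * rules update) with a depth-first search.
 */
public class TerminationCheck {
    /** Returns true if the trigger graph contains a cycle. */
    public static boolean hasCycle(Map<String, List<String>> triggers) {
        Set<String> done = new HashSet<>();   // fully explored nodes
        Set<String> onPath = new HashSet<>(); // nodes on the current DFS path
        for (String node : triggers.keySet())
            if (dfs(node, triggers, done, onPath)) return true;
        return false;
    }

    private static boolean dfs(String n, Map<String, List<String>> g,
                               Set<String> done, Set<String> onPath) {
        if (onPath.contains(n)) return true;  // back edge: cycle found
        if (done.contains(n)) return false;   // already known to be safe
        onPath.add(n);
        for (String m : g.getOrDefault(n, List.of()))
            if (dfs(m, g, done, onPath)) return true;
        onPath.remove(n);
        done.add(n);
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> ok = Map.of("a", List.of("b"), "b", List.of("c"));
        Map<String, List<String>> bad = Map.of("a", List.of("b"), "b", List.of("a"));
        System.out.println(hasCycle(ok));  // false
        System.out.println(hasCycle(bad)); // true
    }
}
```

A check like this can run entirely at authoring time, so faulty rule sets are rejected before any end user ever triggers them.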

Adaptation Techniques

The adaptation offered by AHA! consists of link hiding or annotation, and the conditional inclusion of fragments. While this may be sufficient for most manually authored pages, it is not sufficient for applications that require a more dynamic creation of pages. For instance, it is not (yet) possible to sort a list of fragments according to a (user-dependent) relevance value. More complex "page constructors" will be investigated in the future.
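The two supported techniques, link hiding and link annotation, can be illustrated with a small helper that decides a link's presentation from a prerequisite test on the user model. The knowledge threshold and the CSS class names are invented for this sketch.

```java
import java.util.Map;

/**
 * Sketch of link adaptation: a link whose prerequisite is met is
 * annotated as suitable; one whose prerequisite is not met is either
 * annotated as unsuitable or hidden (rendered as plain text).
 * Threshold and class names are illustrative, not the AHA! defaults.
 */
public class LinkAdapter {
    /** Returns HTML for the link, adapted to the given user model. */
    public static String adaptLink(String href, String text,
                                   Map<String, Integer> userModel,
                                   String prerequisite, boolean hideUnsuitable) {
        boolean suitable = userModel.getOrDefault(prerequisite, 0) >= 50;
        if (!suitable && hideUnsuitable)
            return text;                         // link hiding: no anchor at all
        String cls = suitable ? "good" : "bad";  // annotation via a CSS class
        return "<a class=\"" + cls + "\" href=\"" + href + "\">" + text + "</a>";
    }

    public static void main(String[] args) {
        Map<String, Integer> model = Map.of("xml.knowledge", 80);
        System.out.println(adaptLink("xslt.xhtml", "XSLT", model, "xml.knowledge", false));
        System.out.println(adaptLink("soap.xhtml", "SOAP", model, "soap.prereq", true));
    }
}
```

Conditional fragment inclusion follows the same pattern: the same boolean test decides whether a fragment's text is emitted at all.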


CONCLUSIONS

AHA! makes it possible to create websites in which links can be hidden or annotated and fragments can be included or omitted, based on adaptation rules and a user model with a very flexible structure (condition-action rules working on a structure of concepts and attributes).

AHA! is general-purpose in the sense that it does not enforce a certain presentation style and that adaptation can be based on arbitrary events and dependencies between concepts (unlike, for instance, learning systems that assume a monotonic process of gaining knowledge by reading pages). At the moment, however, AHA! is still fairly simple: it only works with manually authored rules, and conditional inclusion of fragments is the only type of content adaptation it supports.


ACKNOWLEDGMENTS

The development of AHA! is made possible by the NLnet Foundation. Documentation and new releases can be found on the website http://aha.win.tue.nl/.


REFERENCES

  1. Brusilovsky, P., Methods and Techniques of Adaptive Hypermedia. User Modeling and User-Adapted Interaction, 6, pp. 87-129, 1996.
  2. Brusilovsky, P., Eklund, J., Schwarz, E., Web-based education for all: A tool for developing adaptive courseware. Computer Networks and ISDN Systems (Proceedings of the Seventh International World Wide Web Conference), 30 (1-7), pp. 291-300, 1998.
  3. Brusilovsky, P., Adaptive hypermedia. User Modeling and User-Adapted Interaction, 11 (1/2), pp. 87-110, 2001.
  4. De Bra, P., Calvi, L., AHA! An open Adaptive Hypermedia Architecture. The New Review of Hypermedia and Multimedia, pp. 115-139, 1998.
  5. De Bra, P., Houben, G.J., Wu, H., AHAM: A Dexter-based Reference Model for Adaptive Hypermedia. Proceedings of ACM Hypertext'99, Darmstadt, pp. 147-156, 1999.
  6. Perkowitz, M., Etzioni, O., Towards Adaptive Web Sites: Conceptual Framework and Case Study. Proceedings of WWW8, 1999.
  7. Wu, H., De Kort, E., De Bra, P., Design Issues for General Purpose Adaptive Hypermedia Systems. Proceedings of the 12th ACM Conference on Hypertext and Hypermedia, pp. 141-150, 2001.