Paul De Bra*
Department of Computing Science
Eindhoven University of Technology
PO Box 513
NL 5600 MB Eindhoven
The Netherlands email@example.com
Department of Romance Languages
University of Antwerp
B 2610 Wilrijk
*Paul De Bra is also affiliated with the University of Antwerp and the "Centrum voor Wiskunde en Informatica" in Amsterdam.
Abstract: Since early 1994 the course ``2L670: Hypermedia Structures and Systems'' has been available through the Web. It is currently part of the curriculum for computing science and related fields at six universities in The Netherlands and Belgium, and occasionally offered to students from other institutes as well. The software used to deliver this course over the Web has evolved from a static hyperdocument to a versatile adaptive hypermedia system that can be used for many purposes. We call the system AHA, which stands for Adaptive Hypermedia Architecture.
The core of the AHA system consists of an engine which maintains a user-model based on knowledge about concepts. Knowledge is generated by reading pages and by taking tests. The (textual or multimedia) content of a page can be adapted by means of fragment variants. The (hyper)links are annotated by changing the color of the link anchor (the link text or the border in case of images). The color scheme can be configured by the author and overridden by the user, to choose between link annotation and link hiding. When desired, link removal can also easily be implemented.
The adaptive hypermedia software can be used for all kinds of applications, not necessarily limited to education (which is what its primary purpose was). It is written (almost) entirely in Java and thus portable to different computing platforms. It is freely available for non-commercial use.
keywords: user modeling, conditional content, link hiding, link annotation.
Several adaptive hypermedia applications have been introduced over the past few years. The overview article by Brusilovsky [B96] names most of them and describes the adaptive techniques used in each application. In most cases the software used for maintaining the underlying user models and for generating the adaptive content and link structure is tied closely to the single application for which it was developed. A notable exception is the Interbook system [BSW96b], a descendant of ELM-ART [BSW96a]. While ELM-ART is only a Lisp course (which, incidentally, also uses Lisp for the software that maintains the user model and generates the adaptive content), Interbook can be used to create courses on different topics. Although more general than ELM-ART, Interbook is still aimed specifically at educational applications. It uses a fixed frames structure to represent an (adaptive) table of contents, a set of known or required concepts, a content page, etc. The front-end of such a system (the user interface, realized in HTML) is tied closely to the back-end (the engine which maintains the user model and performs the adaptation).
This paper presents the development at the Eindhoven University of Technology [DC97], which goes one step further: the software for adaptive hypermedia which was originally developed for a course on ``Hypermedia Structures and Systems'' [DB94] has been made very generic by concentrating all functionality in the back-end, and leaving presentation issues to the author. The adaptive system, still called AHA (for Adaptive Hypermedia Architecture), can be used with HTML presentations with or without frames. The engine which maintains the user model can be used to generate conditional text and to adapt the link structure through link removal, link hiding and link annotation. In [B96] six application areas for adaptive hypermedia are mentioned. AHA supports five of them: on-line information systems, on-line help, educational hypermedia, institutional hypermedia and personalized views. The one application area which is somewhat more difficult to realize using this software is information retrieval hypermedia (because that requires sorting of links, which AHA does not support).
Section 2 presents the structure of a user model and how it is used to generate adaptivity. In Section 3 we show how adaptive content can be realized using standard HTML. Section 4 illustrates the different ways to adapt the link structure through the use of link classes. In Section 5 we present the global architecture of the back-end and show how this engine interacts with the browser to ensure that presented pages always correspond to the correct user model.
Adaptive systems try to anticipate the needs and desires of the user. Any knowledge a system has about the user is based on that user's (prior) actions. The system may simply monitor what a user is doing or it may ask questions. Intelligent tutoring systems are typical: they build a user model based on what reading material is offered to the user, and validate that model by means of (mostly multiple-choice) tests. A simple way to represent a user model is by means of a set of pairs (c,v) where c is a concept and v is a value which indicates how much (or little) the user knows about that concept. However, such concept-value pairs need not represent knowledge only: user preferences can be represented in the same way. Authors can offer variants of hyperdocuments by presetting some pairs. In the sequel we will always use the term concept, keeping in mind that it may mean preference as well.
The data types used for indicating knowledge (or preferences) are simple: some systems allow many values, for instance a ``percentage'' (integer values between 0 and 100), some have a few numeric or named values, like no knowledge, read about and knows about, and others have just Booleans (true and false). Some applications use only a few concepts (for which they have knowledge tests) while others use many concepts (maybe even one for every page).
Every representation system with a finite number of values can be simulated by a system with just Booleans. For systems with many values, like percentages, such a simulation would be impractical. However, we know of no system which actually calculates percentages to the point that more than a few discrete values can be obtained, or to the point that every discrete value has a different influence on the adaptation. When a system supports three or four values per concept, each concept can be replaced by two Boolean concepts. For instance, instead of a concept something we could create the concepts read-about-something and knows-about-something. In the AHA system we opted for Boolean values.
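The two-Boolean encoding described above can be sketched as follows. This is a hypothetical illustration in Python, not AHA's actual code (AHA is written in Java); only the encoding idea is taken from the text.

```python
# Hypothetical sketch: the three states of a concept "something"
# (unknown / read about / known) encoded by two Boolean concepts.
user_model = {
    "read-about-something": False,
    "knows-about-something": False,
}

def read_page(model, concept):
    # Reading the page about a concept sets its "read about" Boolean.
    model["read-about-" + concept] = True

def pass_test(model, concept):
    # Passing a test on a concept sets its "knows about" Boolean.
    model["knows-about-" + concept] = True

read_page(user_model, "something")
```

Together the two Booleans distinguish the three (or four) states a multi-valued concept would have expressed.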
In AHA knowledge about a concept is generated either by reading a (single) page or by taking a test. This implies that concepts are fairly fine-grained: if the user must read five pages to achieve some desired state, each of the five pages must be associated with a different concept, and the five concepts together define the desired goal.
Apart from a set of known concepts, the AHA software also maintains a logfile for each user. Each time the user accesses a page, two log entries are created: one for the start and one for the end of the period during which the user (supposedly) reads the page. (In this way the reading time for each page is logged.) For each test the score is stored as well. The log is part of the user model, so it can be used to mark links to pages the user has already read differently from links to unread pages. Most WWW-browsers also change the color of visited links, but this behavior would interfere with the adaptive linking. Also, users may be able to clear the browser's history and associated link coloring scheme, and thereby disable the guidance the adaptive hypertext software tries to provide through link colors (see Section 4).
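As a sketch of how such paired start/end entries yield a reading time, consider the following Python fragment. The log format shown is invented for illustration; AHA's actual log layout may differ.

```python
from datetime import datetime

# Invented log format: (kind, page, timestamp) tuples, one "start"
# and one "end" entry per page visit, as described in the text.
log = [
    ("start", "page1.html", "1998-03-01 10:00:00"),
    ("end",   "page1.html", "1998-03-01 10:04:30"),
]

def reading_time(entries, page):
    # Pair the start and end entries for one visit to a page and
    # return the elapsed time in seconds.
    fmt = "%Y-%m-%d %H:%M:%S"
    stamps = {kind: datetime.strptime(stamp, fmt)
              for kind, name, stamp in entries if name == page}
    return (stamps["end"] - stamps["start"]).total_seconds()

print(reading_time(log, "page1.html"))  # → 270.0
```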
While concepts can be used to represent user preferences the AHA system implements color preferences differently. Colors of links can be selected by the user. A Boolean representation would not be desirable, so explicit color values are stored in the user model.
Depending on the user's knowledge state, information on a given subject may need to be presented in different ways. Students who are reading about hypertext for the first time, for instance, may be confused when they see the term ``node'', whereas the word ``page'', used in the same context, would be meaningful to them, and probably sufficiently accurate in an introductory text. In the course text for ``2L670: Hypermedia Structures and Systems'', the students must first visit a ``readme'' page with instructions on how to use the courseware and how to configure their WWW-browser. Therefore a short paragraph which tells students to go to the readme page is prominently displayed, along with a link to that page. After the user reads the instructions, the textual content of the start-page of the course is changed automatically: the pointer to the readme page is removed from the top, and a small reminder at the bottom of the page is all that remains.
AHA implements adaptive content in HTML by means of a preprocessor that filters content fragments by means of conditionals encoded in structured HTML comments:
<!-- if definition and history -->
This part appears if the two "concepts" definition and history are
both known according to the user model.
<!-- else -->
If this is not the case then this alternative is presented instead.
<!-- endif -->

Another use of such conditionals is to combine viewgraphs with and without comments in a single source, like:
<LI>This is a viewgraph item.
<!-- if verbose -->
And this is some additional comment which is only shown when
"verbose" is true.
<!-- endif -->

Similar constructs can be used to choose between a text paragraph, an image, video, etc.
Because the comments have no ``meaning'' in HTML, there is no need to respect the proper nesting of conditionals and HTML tags. The following example would not be allowed if the comments were meaningful HTML tags:
<LI>This is an item in an unordered list.
<!-- if interrupt-list -->
</UL>
This conditional text interrupts the list.
<UL>
<!-- endif -->
<LI>This is the next item in the list.

This flexibility does imply that the author must ensure that the proper begin and end tags for lists or other HTML constructs are included under all circumstances.
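The filtering described above can be sketched in a few lines. This is an illustrative Python approximation, not AHA's actual (Java) preprocessor: it handles only non-nested conditionals with a single concept name and an optional else branch, whereas AHA's real conditions can be expressions over several concepts.

```python
import re

# One non-nested conditional: <!-- if c --> ... [<!-- else --> ...] <!-- endif -->
COND = re.compile(
    r"<!--\s*if\s+(\w[\w-]*)\s*-->"     # opening conditional, one concept
    r"(.*?)"                            # the "then" fragment
    r"(?:<!--\s*else\s*-->(.*?))?"      # the optional "else" fragment
    r"<!--\s*endif\s*-->",
    re.DOTALL,
)

def filter_page(html, user_model):
    """Keep, for each conditional, the fragment that matches the model."""
    def choose(match):
        concept, then_part, else_part = match.groups()
        if user_model.get(concept, False):
            return then_part
        return else_part or ""
    return COND.sub(choose, html)

page = ('<LI>This is a viewgraph item. <!-- if verbose --> And this is '
        'some additional comment. <!-- endif -->')
print(filter_page(page, {"verbose": True}))
print(filter_page(page, {"verbose": False}))
```

Because the conditionals are ordinary comments, a page filtered this way remains plain HTML that any browser can display.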
Depending on the user model an adaptive hypertext system will guide the user towards some "desired" pages, and away from pages that contain information which is not relevant at that time, or for which the user does not have the necessary foreknowledge. Brusilovsky [B96] distinguishes five types of adaptive linking: direct guidance, adaptive link sorting, adaptive link annotation, adaptive link hiding and map adaptation. Most adaptive hypertext systems offer only one (or maybe two) of these features. Interbook [BSW96b] for instance concentrates on link annotation, meaning that desired links are marked differently from undesired links, in this case by means of a colored (big) dot. (It also offers direct guidance through a ``teach me'' feature.)
The course 2L670 offered only link hiding, but in AHA this restriction has been lifted. Calvi [C98] has subdivided link hiding into three subclasses.
The latest version of AHA directly supports link annotation and link hiding, but link removal can easily be simulated, and direct guidance can be implemented with a little more authoring effort.
<!-- if desired -->
<a href="...">
<!-- endif -->
here is the link anchor text
<!-- if desired -->
</a>
<!-- endif -->

This trick, with the two ``if'' clauses, is only needed if you wish to avoid writing the link anchor text twice. A simpler, but redundant form would be:
<!-- if desired -->
<a href="...">here is the link anchor text</a>
<!-- else -->
here is the link anchor text
<!-- endif -->
A conditional choice of link destination can be written in a similar way:

<!-- if definition -->
<a href="theorem.html">
<!-- else -->
<a href="definition.html">
<!-- endif -->
next</a>

Offering direct guidance is difficult because the author needs to know which pages are best to go to next, depending on each user's knowledge state. Although this may seem to be an authoring problem and not a problem with the AHA software, direct guidance can be delivered automatically (to some extent) by a system that checks the dependencies between concepts. AHA currently does not offer this feature, but Interbook for instance does (through a ``teach me'' button).
The AHA system delivers HTML pages that consist of four parts.
Each HTML page the author creates (optionally) starts with two comments:
<!-- requires readme and history but not (intro or definition) -->
<!-- generates history -->
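A sketch of how such a ``requires'' expression might be evaluated against a Boolean user model is given below. The translation onto Python's own Boolean operators is our illustration only; AHA's real parser is part of its Java engine, and the treatment of ``but not'' as ``and not'' is an assumed reading of the example above.

```python
import re

def requires_met(expression, user_model):
    # Assumed semantics: "but not" reads as "and not".
    expr = expression.replace("but not", "and not")
    def lookup(match):
        word = match.group(0)
        if word in ("and", "or", "not"):
            return word                 # keep the Boolean operators
        # Replace a concept name by its truth value; unknown = False.
        return str(user_model.get(word, False))
    expr = re.sub(r"[A-Za-z][A-Za-z0-9-]*", lookup, expr)
    # After substitution expr contains only True/False, and/or/not and
    # parentheses, so eval is harmless here.
    return eval(expr)

model = {"readme": True, "history": True}
print(requires_met("readme and history but not (intro or definition)", model))
```

If the expression evaluates to false, the engine can hide or annotate links to the page instead of presenting it as desired.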
The AHA script recognizes three types of links.
The user can change this color scheme through a ``setup'' page. Also, a list of pages that have been read and a list of pages still to be read can be displayed. Multiple-choice tests are available in two versions: one that requires the user to answer correctly (and offers no explanatory feedback to wrong answers) and one that offers explanations of errors and does not require multiple tries. All these features are available through the same CGI-script that generates the pages.
When a user ``logs on'' (by filling out a form) the system generates page URLs that contain the name of the CGI-script that generates the pages, the identity of the user, and the name of the requested page. Thus a URL may look like:
http://servername.domain/2L670/cgi/get.cgi/user-id/page.html

Although pages are individualized, they are currently not password protected. (The login form requires a password only to prevent users from accidentally logging on as another user. This may happen occasionally because students use numbers as their user-id.)
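The way the engine might recover the user identity and page name from the extra path information in such a URL can be sketched as follows (a hypothetical helper, not AHA's actual code; a CGI server passes the part of the path after get.cgi in the PATH_INFO environment variable).

```python
def split_path_info(path_info):
    # e.g. PATH_INFO = "/user-id/page.html" for the URL shown above:
    # the first path component names the user, the rest names the page.
    user, page = path_info.lstrip("/").split("/", 1)
    return user, page

print(split_path_info("/user-id/page.html"))  # → ('user-id', 'page.html')
```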
The AHA software is freely available upon request. It is written entirely in Java (1.1) and should be easy to integrate into any Unix-based Web-server that supports CGI or Fast-CGI. (Porting to other platforms may be a bit tricky because of the use of a few Unix utilities to generate the dependency file and concept list, and of a shell script to read environment variables, which is not possible in Java 1.1.)
Visitors are welcome to try a few applications of this software: ``http://wwwis.win.tue.nl/2L690/'' is the address of the most recent version of the course on ``Hypermedia Structures and Systems''. ``http://wwwis.win.tue.nl/2M350/'' is the address of a course on ``Graphical User-Interfaces'' which uses the adaptivity mostly to set a user-preference for viewgraphs versus full text. ``http://wwwis.win.tue.nl/IShype/'' is the address of a small set of information pages (in Dutch) about writing a master's thesis in our Information Systems Group. It illustrates the use of AHA in an application which is not an on-line course.
A final, but important remark is that, like probably all adaptive hypermedia systems, AHA does not reduce the need for skilled authors. While adaptive hypermedia in general, and AHA in particular, offers the possibility to avoid certain usability problems caused by the user taking paths through a hyperdocument which the author did not foresee, it certainly does not eliminate usability problems automatically. Although we have not done this, it is possible to base an adventure game on AHA, using the adaptive features to make it more difficult to find your way by changing the contents of pages and the link structure in ways which the player does not expect.
The adaptive hypermedia system AHA offers adaptive content through fragment variants and adaptive link presentation through link annotation, link hiding and/or link removal. Its focus is on simplicity and the use of standards. The AHA engine is written in Java, and works as a CGI or FCGI script. Hyperdocuments for AHA are written in standard HTML. HTML comments are used for delimiting fragment variants and link classes are used to perform the link annotation.
Keeping the AHA system simple, portable and based on standards comes at a price: there are currently no tools for supporting the design of an adaptive hyperdocument, including the definition of concepts and dependencies between concepts. Also, writing documents in HTML may be problematic for some authors. Tools for generating HTML from other document formats are available but not always sufficiently general to support the HTML features used by AHA.