AHA! An open Adaptive Hypermedia Architecture

Paul De Bra*
Department of Computing Science
Eindhoven University of Technology (TUE)
PO Box 513, Eindhoven
The Netherlands

debra@win.tue.nl
Licia Calvi
Department of Computer Science
Trinity College
Dublin 2
Ireland

Licia.Calvi@cs.tcd.ie

*Paul De Bra is also affiliated with the University of Antwerp, and with the "Centrum voor Wiskunde en Informatica" in Amsterdam.

Abstract: Hypermedia applications generate comprehension and orientation problems due to their rich link structure. Adaptive hypermedia tries to alleviate these problems by ensuring that the links that are offered and the content of the information pages are adapted to each individual user. This is done by maintaining a user model.

Most adaptive hypermedia systems are aimed at one specific application. They provide an engine for maintaining the user model and for adapting content and link structure. They use a fixed screen layout that may include windows (HTML frames) for an annotated table of contents, an overview of known or missing knowledge, etc. Such systems are typically closed and difficult to reuse for very different applications.

We present AHA, an open Adaptive Hypermedia Architecture that is suitable for many different applications. This paper concentrates on the adaptive hypermedia engine, which maintains the user model and which filters content pages and link structures accordingly. The engine offers adaptive content through conditional inclusion of fragments. Its adaptive linking can be configured to be either link annotation or link hiding. Even link disabling can be achieved through a combination of content and link adaptation. We provide three examples of different applications that are all based on the same engine: an adaptable course text where full text and viewgraphs are integrated, a fully adaptive course on the subject of hypermedia (using a simple presentation without HTML frames) and a "kiosk" information system (with frames and JavaScript).

Keywords: adaptive hypermedia, user model, conditional content, link hiding, link annotation, standard HTML

1. Introduction and Background

Authoring usable hypermedia documents has turned out to be more difficult than anyone (we know of) ever anticipated. Several researchers have already indicated that coherence and (avoidance of) cognitive overhead are important issues that make authoring difficult (1), (2). Two severe usability problems in hypermedia are (content) comprehension and orientation:

Adaptive hypermedia (or AH for short) tries to overcome these problems through the use of techniques from the area of user modeling and user-adapted interactive systems. (Many interesting research papers on AH appear in the Journal "User Modeling and User-Adapted Interaction", which is published by Kluwer Academic Publishers. We refer to several of these publications in this paper.) An adaptive hypermedia system (or AHS for short) builds a model of the "mental state" of the user, partly through explicit directives given by the user, and partly through observing the interaction between the user and the system. User preferences and explicit indications of (fore)knowledge provide the information that makes the system adaptable. Observing the user, often without that user even being aware of it, enables the system to make the user feel comfortable because all hypertext links always lead to interesting and relevant information which the user (supposedly) is able to understand. This automatic adaptation is what makes the system adaptive. To reach this goal the system adapts the contents of each node to the appropriate level of difficulty, adding explanations where needed, avoiding technical terms that cannot (yet) be understood, removing irrelevant detail, etc. The system also ensures that the user is guided towards links that lead to more relevant, interesting and understandable information.

The above claim sounds too good to be true, and to some extent this is justified criticism: AH provides a framework that makes it possible to alleviate the common comprehension and orientation problems, but in order to actually achieve this goal one still needs a skilled author. As De Young (3) states:

"(...) if a hypertext is a densely connected unstructured graph, no presentation of the mess is going to clean it up or make it conceptually easier to understand."
In this paper we present (part of) AHA, an open Adaptive Hypermedia Architecture for generating Web-based applications. In particular we focus on the adaptive hypermedia engine of AHA, which maintains the user model and performs the adaptive filtering of information contents and link structure. AHA is a tool for creating adaptive hypermedia applications, but not a magic wand that turns any given hyperdocument into an easily usable adaptive hypermedia application.

Many different definitions of and techniques for adaptive hypermedia exist. A good overview is presented by Brusilovsky (4). This overview characterizes 29 different adaptive hypermedia systems as to their type of application and the adaptive techniques they use. The application areas are educational hypermedia systems, on-line information systems, on-line help systems, information retrieval hypermedia, institutional hypermedia and personalized views. Only 5 of the systems were classified under more than one of these areas, of which 4 are both an on-line information system and an information retrieval hypermedia system. The remaining multi-category system is Hypadapter (5), an "Adaptive Hypertext System for Exploratory Learning and Programming", which Brusilovsky classifies as both an educational hypermedia system and an on-line information system, but which in our opinion is much more the former than the latter.

AHA is (an extension of) the architecture and AH engine used for the on-line course "Hypermedia Structures and Systems" (hereafter called "the hypermedia course"), developed by the first author, and offered to students at several universities in The Netherlands and Belgium. We often refer to this course as "2L670", its original TUE course code (or 2L690, its current code). AHA maintains a simple user model consisting of a set of boolean variables. In the original application these variables are used to represent the knowledge of a user by means of concepts. The values true and false for a concept mean that the concept is known or not known. However, the boolean variables may be used for any desired purpose. Some variables represent the outcome of a test, so their values indicate passed or failed. In a second course (on Graphical User-Interfaces) we use a variable "verbose" that can be toggled to select between full text and viewgraphs. In our third application, an on-line information kiosk for informing students about the possibilities and rules for doing a master's thesis in the Information Systems Group, variables are simply used to determine the progress of the user, in order to decide whether enough of the material has been visited to allow the user to sign up for an interview.

AHA separates the AH engine from the user-interface design, unlike most other AHS which are complete systems that are strongly tied to a single application or application area. As the cited hypermedia course is already a few years old (and was not adaptive at first), its presentation uses a simple page-based model, without HTML frames (introduced by Netscape and only recently incorporated into the HTML-4.0 standard). The information kiosk, IShype, uses frames to present a table of contents and an information page simultaneously, and JavaScript to keep the two frames in sync with each other. The AHA engines serving the pages for the two courses and the kiosk are identical (in fact they are one and the same engine).

In AHS most of the content and the link structure are authored by a human author. There is also a generation of so-called dynamic hypertext systems. These systems use natural language techniques to generate nodes (with "fluent English" content) and links on the fly. Dynamic hypertext systems like Peba-II (6) and ILEX (7) are also adaptive, meaning that the nodes and links they generate depend on the user's browsing history, just like in "plain" AHS. To some extent, adaptive dynamic hypertext systems offer automated skilled authors. We do not go into details of dynamic hypertext systems in this paper. An extensive overview is given by Milosavljevic (8).

The remainder of this paper consists of 4 sections. Section 2 briefly recalls and extends the definitions of different AH techniques as given in (4), (9) and (10). Section 3 describes the HTML elements used for adaptivity, and how the AHA engine performs its adaptive functions. Examples are taken from our three applications. Section 4 describes the (lower level) access to the AHA engine that allows other applications to query and update the user model. This demonstrates why we call AHA an open AH architecture, even though it is not an "Open Hypermedia System" in the sense of e.g. publications from (11). Section 5 concludes and presents future work.

2. Adaptive Techniques used in AHA

Brusilovsky (4) distinguishes two main categories of adaptivity: adaptive presentation, which adapts the content of the pages, and adaptive navigation support, which adapts the links that are offered to the user.

AHA offers both categories of adaptivity, but only certain options. The textual content of pages is adapted based on the user model. AHA does this through HTML constructs described in Section 3. It has no built-in support for adapting the content of non-textual information items (but neither has any other operational AHS). Achieving direct guidance, adaptive link sorting or map adaptation in AHA would be non-trivial. (We have not tried it in any of our applications.) However, the ways in which AHA can adapt links are more versatile than plain adaptive link annotation or adaptive link hiding: AHA also makes adaptive link disabling and adaptive link removal possible. It is the only AHS we know of that supports all four of these ways of link adaptation (annotation, hiding, disabling and removal).

2.1 Adaptive content presentation

A hyperdocument contains a body of information, divided into logical (and physical) chunks, called nodes. The order in which the nodes are presented depends on the links a user chooses to follow. The goal of adaptive content presentation is to ensure that each node is presented in an appropriate way for each individual user. In order to achieve this goal it may be necessary to add some extra (explanatory) information, remove some (no longer relevant) information, or change the wording.

AHA provides adaptive content presentation through conditional fragments, a low-level way to realize fragment variants, a technique that implements the explanation variants method (4). We use the term conditional fragments to indicate that we have pieces of content that are included when a certain condition is met. In AHA such fragments are pieces of HTML text (possibly with included objects like images or applets). Our term should make it clear that conditions may not only result in alternative presentations of a piece of information, but also in hiding it altogether. Below is an example from the hypermedia course. The fragment is taken from a page describing the meaning of a URL. The concept of a URL is related to the addressing scheme of Xanadu. There are three variations of the fragment (a sketch of the authored source follows the example):

  1. When the fragment is read before the user has reached the chapter on the "history of hypermedia", the fragment looks like:
    ...
    In Xanadu (a fully distributed hypertext system, developed by Ted Nelson at Brown University, from 1965 on) there was only one protocol, so that part could be missing. Within a node every possible (contiguous) subpart could be the destination of a link.
    ...
    (In this example the word "Xanadu" appears in black but is actually a link anchor.)
  2. When the user has read some pages of the chapter on the "history of hypermedia", but not yet the page on Xanadu, the fragment is essentially unchanged but a link to a page on Xanadu becomes apparent:
    ...
    In Xanadu (a fully distributed hypertext system, developed by Ted Nelson at Brown University, from 1965 on) there was only one protocol, so that part could be missing. Within a node every possible (contiguous) subpart could be the destination of a link.
    ...
    (On the screen the word "Xanadu" would appear in blue, indicating a link to a page that has not been visited yet.)
  3. After reading the page on Xanadu (to which the blue link points), the color of the link to the Xanadu page changes to purple. The short explanation on Xanadu is removed because the user already knows much more about Xanadu:
    ...
    In Xanadu there was only one protocol, so that part could be missing. Within a node every possible (contiguous) subpart could be the destination of a link.
    ...
    (On the screen the word "Xanadu" would appear in purple, indicating a link to a page that has already been visited.)
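The three variants above do not require three separate versions of the page. The parenthetical explanation of Xanadu is a conditional fragment (using the comment-based if construct described in Section 3.4), while the changing colors of the word "Xanadu" result from the adaptive treatment of a single conditional link. The following sketch is not the literal source of the course page: the concept name xanadu and the file name xanadu.html are only illustrative.

<!-- if not xanadu -->
  In <A HREF="xanadu.html" CLASS="conditional">Xanadu</A>
  (a fully distributed hypertext system, developed by Ted Nelson at
  Brown University, from 1965 on) there was only one protocol, so
  that part could be missing.
<!-- else -->
  In <A HREF="xanadu.html" CLASS="conditional">Xanadu</A>
  there was only one protocol, so that part could be missing.
<!-- endif -->
  Within a node every possible (contiguous) subpart could be the
  destination of a link.

As long as the requirements of the Xanadu page are not fulfilled the conditional link is shown in black, like ordinary text; once they are fulfilled it is shown in blue, and after the Xanadu page has been read it is shown in purple and the short explanation disappears.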

Some systems, like Anatom-Tutor (12), Lisp-Critic (13), Hypadapter (5) and several others, use the user model to select whole page variants instead of just fragment variants. It is of course possible to create page variants by means of "large" fragment variants. The conditional fragments in AHA may be arbitrarily large.

When conditional fragments are only used to conditionally include pieces of content, and not to provide alternatives to fragments, the technique is called stretchtext. This technique is used in MetaDoc (14) and KN-AHS (15). It is easy to implement the "initial appearance" of stretchtext in AHA. In fact, this is done in the above example. This technique is used a lot in our course text for "Graphical User-Interfaces" that combines viewgraphs with full-text:

Stretchtext also implies that the user can open and close fragments at will, a feature which cannot (yet) be realized in AHA. This "open and close" technique was first introduced as replacement links in the Guide hypertext system (16). In Guide, text fragments open up when the user clicks on a link, and can be closed with a mouse click as well. However, in Guide no fragments are "pre-opened" upon loading a page.

When we claim that AHA implements fragment variants, page variants and (at least part of) stretchtext we mean that these forms of adaptation can be performed easily through the use of conditional fragments. This can be done without including a fragment more than once in the (HTML) source text of a page. An adaptive technique that requires nodes to be assembled from small pieces of information, like slots in the frame-based technique used for EPIAIM (17), would be difficult to implement in AHA. Frame-based techniques are typically used in dynamic hypertext systems, like Peba-II (6) and ILEX (7). The technique determines not only which fragments (slots) to present, but also in what order. Turning a selected (or generated) order of fragments into a coherent text requires natural language techniques to make the sequence of text fragments sound natural. Simulating just the sorting part of this technique in AHA would require fragments to be repeated in the source of nodes. (The number of repetitions increases with the number of different orders in which the fragments can be presented.)

2.2 Adaptive link presentation

Links can be adapted in many ways:

We only consider the adaptive link presentation techniques from (4), (9) and (10) that can be realized without (major) content adaptation.

The decision whether a link is relevant or not (or maybe something in between) is based on the user model and on the link itself. We distinguish two possibilities:

Apart from adaptive link presentation AHA also supports plain, non-adaptive links. This enables an author to create both an explicit static link structure that is always present and an adaptive link structure that provides additional access to nodes depending on the user model.

3. Authoring Adaptive Hyperdocuments for AHA

AHA is an open adaptive hypermedia architecture for the Web (but not an Open Hypermedia System in the usual sense of this term (11)). While many AHS use a WWW-browser as their user-interface platform, most of them require the author of the hyperdocument to create nodes and links using non-Web-based languages and tools. AHA is completely Web-based in the sense that it only uses "standard" Web languages and tools. This section describes the language used for authoring. Section 4 describes how the AHA engine is constructed out of popular Web components.

Nodes (and links) for AHA are written in standard HTML. They can be authored using any of the many available HTML authoring tools, using a plain text editor, or using a word processor that is able to generate HTML. The two features of an HTML editor that are essential for AHA are:

Most HTML editors, including Netscape Composer and HoTMetaL Pro (which we tested), have this ability. (Unfortunately, some other tools that are used for generating HTML, like MS-Word'97, are not suitable.) Our main motivations for using only standard HTML are that we consider it unacceptable to enforce a special-purpose editor upon authors who most likely already use tools for editing or generating HTML, and that we lack the resources as well as the ambition to write an HTML editor ourselves (which is something we would need to do if we wished to use extensions to HTML). A major drawback of standard HTML is that link anchors have to be authored as part of the page contents. We hope that future languages for the Web will allow links to be separated from contents, so that authoring for the Web will become more like authoring for Open Hypermedia Systems, in which links are typically not embedded within the pages.

3.1 General properties of nodes (pages)

HTML pages for AHA contain "structured HTML comments". An earlier version (22) used C-preprocessor "#if" constructs instead, much like the adaptive C-course described in (23).

Every node in an AHA hyperdocument starts with two HTML comments:

  1. The first comment contains a boolean expression using variable names (which are not allowed to contain white-space). The evaluation of this expression is used to determine whether links to this page are desirable or not. An example of such a first line is:
    <!-- requires readme but not ( intro and definition and history ) -->
    
    The word but is treated as a synonym for and; it sometimes makes expressions more readable. Expressions contain arbitrary combinations of and, or, not and parentheses, with the usual precedence rules. There is a special predefined expression true that is used to indicate that there is no condition for this node. A node without condition must contain the line:
    <!-- requires true -->
    
    The "requires comment" is mandatory. The next comment (see below) is optional.
  2. The second comment enumerates the (boolean) user model variables that become true after reading this node. Initially all variables are false.
    <!-- generates netscape-navigator internet-explorer -->
    
    This comment says that after reading this node the variables netscape-navigator and internet-explorer both become true (presumably meaning that the user has read something about Netscape Navigator and Internet Explorer). The current authoring language for AHA does not contain a construct for setting variables to false. Although this does not formally restrict the power of the language and system, it may make some applications more difficult to create. This restriction is inherited from the first application of AHA, the on-line hypermedia course. There it was assumed that variables actually represent concepts, that the value true means that the student knows the concept, and that acquired knowledge is never lost. Because of this interpretation of the variables we called the system proficiency-adapted in (24).
    As we explain in Section 4 the AHA engine does offer the possibility to set variables to false, and the standard setup form enables users to set each individual variable to true or false through a series of checkboxes.

AHA also allows the generation of a header and footer for a page (through <!-- header --> and <!-- footer --> comments). The header and footer are small fragments of HTML source that can be included at the top and bottom of a page. The header is always followed by a line that provides feedback on what the user has read already. The figure below shows the header for the hypermedia course:

The header file contains the images, and is designed by the author of the hyperdocument. The line of text giving feedback on the progress of the user is generated by the AHA engine.
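Putting these elements together, the source of an AHA node has roughly the following shape. The concept names and the title are illustrative, and the placement of the header and footer comments just inside the BODY element is an assumption of this sketch; they merely mark where the author-designed header and footer (and the generated progress line) are inserted.

<!-- requires readme and intro -->
<!-- generates url -->
<HTML>
<HEAD><TITLE>The meaning of a URL</TITLE></HEAD>
<BODY>
<!-- header -->
  ... the content of the node, containing conditional fragments
      and (conditional) link anchors ...
<!-- footer -->
</BODY>
</HTML>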

3.2 Developing a user model for AHA hyperdocuments

A user model in AHA consists of a set of boolean variables. Initially all variables have the value false. Reading a page or completing a test may change zero or more variables to true. While this is conceptually very simple, it may lead to very elaborate user models because of the generally large number of variables (or concepts). A user model for the hypermedia course for instance consists of 148 (boolean) variables. Unlike some adaptable systems that distinguish only a few stereotype users (like "novice", "intermediate" and "expert"), the hypermedia course can represent 2^148 different users. However, many of these user model instances can only be obtained by explicitly setting the user model through the setup form. The dependencies between variables (or pages) mean that a much smaller number of variations can be generated through browsing (and taking tests) only. It may seem difficult to author an adaptive hyperdocument with 148 variables. For each page the author needs to consider which boolean expression in some or many of the 148 variables determines whether the page is desirable or not. Luckily the dependencies between variables greatly simplify this task.

The boolean model is somewhat limited. For instance, when a user reads an undesired page, AHA assumes that the information is not understood: no variables are changed to true. Still, the user is likely to have learnt something by reading the page. Interbook (20) distinguishes not two but four levels of knowledge about each concept (unknown, learnt, well learnt and tested). A concept becomes learnt when the user reads about it on an undesired page; reading desired pages makes a concept well learnt. But knowledge is only truly validated through a test. In AHA different variables are used for reading about a concept and for taking a test about it.

While there may be valid arguments for the four-value approach of Interbook, others may argue for three, five, six or still other numbers of values. Pilar da Silva (25) uses percentages (values between 0 and 100). Reading a page about a concept generates a certain percentage of the knowledge about that concept. Percentages offer a rich value domain, and may be useful for taking into account that a user may forget, i.e. that knowledge may decay. (This is not used in (25).) But such a rich value domain creates serious authoring problems. Should percentages add up? Should they always add up or only when the author specifies it? Maybe pages have partly overlapping information, so the percentages should only partly add up? Maybe the time spent reading a page should have an influence on the percentage learnt? While it is clear that the boolean model of AHA will need to be replaced in the future, it is not clear how to replace it by a much richer model without making authoring more difficult.

3.3 Link types

AHA distinguishes three types of links: conditional, unconditional and external links. Links in HTML are represented through the anchor tag. The two attributes of this tag that play an important role in AHA are the HREF attribute, which defines the destination of the link, and the CLASS attribute, which defines the link type.
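For example (the destination file names and the URL are only illustrative), the three link types could be authored as follows; a link without a CLASS attribute is treated like an unconditional one:

<!-- a conditional link: the engine turns its class into good, neutral
     or bad, depending on the "requires" expression of the destination
     and on whether the destination has been visited -->
<A HREF="xanadu.html" CLASS="conditional">Xanadu</A>

<!-- an unconditional link: it always becomes good (not yet visited)
     or neutral (visited), never bad -->
<A HREF="readme.html" CLASS="unconditional">the instructions</A>

<!-- an external link: it starts with a protocol and is left untouched
     by the engine -->
<A HREF="http://www.w3.org/">the W3C site</A>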

3.4 Conditional content

In order to sometimes show and sometimes hide a piece of content AHA uses special HTML comments that represent if statements. The if statements use the same boolean expressions as the requires line with which each node starts:

<!-- if not readme -->

  You must first read
  <a href="readme.html">the instructions</A>.
  These will explain how to use this course text.

<!-- else -->

  Now that you have read
  <a href="readme.html">the instructions</A>
  you can start studying this course.

<!-- endif -->
To an HTML editor or a WWW-browser comments have no meaning. They may be inserted anywhere (except inside an HTML tag). When considering if statements as real, meaningful tags one would find nesting errors, but since the statements are implemented through comments, these constructs are allowed. The following example shows such a pseudo nesting violation, as used to simulate link disabling:
  The following words are
<!-- if not link-disabling  -->
  <A HREF="disabling.html" CLASS="conditional">
<!-- endif -->
  not always a link.
<!-- if not link-disabling  -->
  </A>
<!-- endif -->
Authoring conditional content needs to be done with care. Although comments have no meaning in HTML (and are interpreted by the AHA engine to conditionally include pieces of content), their use in AHA can easily be a source of HTML syntax errors:
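As a (hypothetical) illustration of such a pitfall, consider an author who opens a table inside a conditional fragment but closes it outside of it (the concept name advanced is made up for this example):

<!-- if advanced -->
  <TABLE>
    <TR><TD>some additional details for advanced readers</TD></TR>
<!-- endif -->
  </TABLE>

For users for whom advanced is false the generated page contains a </TABLE> tag without a matching <TABLE> tag. An HTML editor or browser does not notice the problem in the source, because to them the if and endif comments have no meaning; the error only shows up in (some of) the pages produced by the AHA engine.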

4. The architecture and implementation of AHA

The figure below shows the overall architecture of AHA. It shows which files play a role in assembling or filtering a single node, and also how the communication works between the client (browser), server, and AHA engine.

4.1 Representation of a hyperdocument

A hyperdocument in AHA consists of a directory hierarchy. The "root" directory contains a number of files and subdirectories. (The names can all be configured.)

4.2 The AHA engine

The AHA engine is a CGI or Fast-CGI script that performs the following functions (depending on given parameters):

The AHA engine roughly works as follows (a small input/output example is sketched after the numbered steps):

  1. When the engine receives a request it first determines which kind of action is required, based on the name of the document being requested. For instance, if the name starts with test___ it is treated as a request to generate a multiple-choice test. Names starting with eval___ are evaluations of tests; stop/ means a request coming from the stop applet, etc.
  2. The user model is retrieved and, together with the variable list, used to build and initialize a list of boolean variables. The requirements list is read; it contains a list of all nodes and a copy of their "requires" comment. The complete history of all nodes read by the user is also retrieved.
  3. When a normal (HTML) node is requested, a content filter is applied first. Each if statement is evaluated, and the appropriate pieces of content are copied or not.
  4. Where appropriate, the header or footer is included. The header contains a message about the progress of the user. The number of nodes that have been read is deduced from the logfile, while the number of nodes that exist is deduced from the requirements list. The footer contains the stop applet, used for logging the times when a user stops reading each page.
  5. Then all relative links are parsed. (External links, which start with a protocol like http:, are not changed.) Links without a CLASS attribute, or of class unconditional, are given the class good if they were not visited before, and neutral if they were visited. This results in the default colors blue and purple. Because the history from the log file is used instead of the browser's navigation history, no link expiry mechanism of the browser influences these link colors.
  6. For conditional links, the "requires" expressions of the destinations are extracted from the requirements list. If a link is of the class conditional, the link becomes of the class bad if the expression for its destination evaluates to false. Else it becomes neutral or good depending on whether it was followed or not, just like with unconditional links.

4.3 AHA as an open architecture

Although many adaptive hypermedia systems exist, they are all closed, in the sense that a user model, constructed through one system, cannot be reused by another system. Also, an AHS cannot inform another system about a user model or changes to a user model so that that other system could adapt its own model of the same user. Interbook (20) has a feature for sharing user models and connecting to "user model servers" (26), but not for communicating with "non-Interbook" systems.

AHA provides a way for other systems to retrieve a user model, and also to update the user model:

These methods are still primitive and far from perfect, but they form a first attempt towards opening up an adaptive hypermedia system.

Note that the "openness" of AHA does not make it an "Open Hypermedia System" as e.g. in publications from (11). A typical property of many open hypermedia systems is that links are not embedded within the nodes. AHA relies heavily on the use of HTML which has link anchors embedded within the pages.

4.4 Experience with the use of AHA in "real" applications

We have not yet performed extensive user evaluations of AHA. However, we do have some anecdotal information from authors and users:

5. Conclusions and Future Work

AHA, as an adaptive hypermedia architecture, is different from most adaptive hypermedia systems:

AHA is continuously under development. Extensions that are being considered or worked on are:

The biggest challenge AHS are faced with is how to ensure that adaptive hyperdocuments are more usable than static ones. Because AHA is a general architecture it cannot ensure usability. More research is needed to develop and evaluate tools for analyzing adaptive hyperdocuments in order to find potential usability problems. However, because AHA can provide versions with adaptive link hiding, annotation, disabling and/or removal, or combinations thereof, it is a very suitable platform for experiments that evaluate the usability of each of these techniques.

6. References

[1]
CONKLIN, J. Hypertext: An introduction and survey. IEEE Computer, 20(9), 1987, 17-40.
[2]
THÜRING, M., J. HAAKE, J. HANNEMANN. What's ELIZA doing in the Chinese room? Incoherent hyperdocuments - and how to avoid them. ACM Conference on Hypertext, 1991, 161-177.
[3]
DE YOUNG, L. Linking Considered Harmful. Designing and Reading Hyperdocuments. 1st European Conference on Hypertext, 1990, Cambridge University Press, 238-249.
[4]
BRUSILOVSKY, P. Methods and Techniques of Adaptive Hypermedia. User Modeling and User-Adapted Interaction, 6, 1996, 87-129. (Reprinted in Adaptive Hypertext and Hypermedia, Kluwer Academic Publishers, 1998, 1-43.)
[5]
HOHL, H., H.-D. BÖCKER, R. GUNZENHÄUSER. Hypadapter: An Adaptive Hypertext System for Exploratory Learning and Programming. User Modeling and User-Adapted Interaction, 6(2-3), 1996, 131-156. (Reprinted in Adaptive Hypertext and Hypermedia, Kluwer Academic Publishers, 1998, 117-142.)
[6]
MILOSAVLJEVIC, M. and J. OBERLANDER, Dynamic Hypertext Catalogues: Helping Users to Help Themselves. Proceedings of the ACM Hypertext'98 Conference, 1998, 123-131.
[7]
OBERLANDER, J., M. O'DONNELL, A. KNOTT and C. MELLISH, Conversation in the museum: experiments in dynamic hypermedia with the intelligent labelling explorer, this issue.
[8]
MILOSAVLJEVIC, M., Electronic Commerce via Personalised Virtual Electronic Catalogues, Proceedings of the 2nd Annual CollECTeR Workshop on Electronic Commerce (CollECTeR'98), 1998. (URL: http://www.cmis.csiro.au/Maria.Milosavljevic/papers/collecter98/)
[9]
CALVI, L. A Proficiency-Adapted Framework for Browsing and Information Filtering in Web-Based Educational Systems. Methodological Implications for Language Learning on the WWW. Doctoral thesis, University of Antwerp (UIA), 1998.
[10]
DE BRA, P., L. CALVI. AHA: a Generic Adaptive Hypermedia System. 2nd Workshop on Adaptive Hypertext and Hypermedia, 1998, 1-10. (URL: http://wwwis.win.tue.nl/ah98/DeBra.html)
[11]
WIIL, U.K. (editor). Proceedings of the Third Workshop on Open Hypermedia Systems, CIT Scientific Report nr. SR-97-01, The Danish National Centre for IT Research, 1997. (URL: http://www.daimi.aau.dk/~kock/OHS-HT97/)
[12]
BEAUMONT, I. User Modeling in the Interactive Anatomy Tutoring System ANATOM-TUTOR. User Modeling and User-Adapted Interaction, 4(1), 1994, 21-45.
[13]
FISCHER, G., T. MASTAGLIO, B. REEVES, and J. RIEMAN. Minimalist Explanations in Knowledge-Based Systems. 23rd Annual Hawaii International Conference on System Sciences, 1990, 309-317.
[14]
BOYLE, C. and A.O. ENCARNACION. MetaDoc: An Adaptive Hypertext Reading System. User Modeling and User-Adapted Interaction, 4(1), 1994, 1-19. (Reprinted in Adaptive Hypertext and Hypermedia, Kluwer Academic Publishers, 1998, 71-89.)
[15]
KOBSA, A., D. MÜLLER and A. NILL. KN-AHS: An Adaptive Hypertext Client of the User Modeling System BGP-MS. Fourth International Conference on User Modeling, 1994, 31-36.
[16]
BROWN, P.J. Turning ideas into products: The Guide system. Proceedings of the ACM Hypertext'87 Conference, 1987, 33-40.
[17]
DE ROSIS, F., N. DE CAROLIS and S. PIZZUTILO. User Tailored Hypermedia Explanations. INTERCHI'93 Adjunct Proceedings, 1993, 169-170.
[18]
BRUSILOVSKY, P., L. PESIN. ISIS-Tutor: An intelligent learning environment for CDS/ISIS users. Proc. of the interdisciplinary workshop on complex learning in computer environments (CLCE94), 1994, 29-33. (URL: http://cs.joensuu.fi/~mtuki/www_clce.270296/Brusilov.html)
[19]
BRUSILOVSKY, P. and L. PESIN, Adaptive navigation support in educational hypermedia: An evaluation of the ISIS-Tutor. Journal of Computing and Information Technology 6 (1), 1998, 27-38.
[20]
BRUSILOVSKY, P., E. SCHWARZ and G. WEBER. A tool for developing adaptive electronic textbooks on WWW. Proceedings of the WebNet'96 Conference, 1996, 64-69.
[21]
BRUSILOVSKY, P., J. EKLUND. A Study of User Model Based Link Annotation in Educational Hypermedia. Journal of Universal Computer Science, 4(4), 1998, 429-448.
[22]
CALVI, L., P. DE BRA. Using dynamic hypertext to create multi-purpose textbooks. Proceedings of ED-MEDIA'97, 1997, 130-135.
[23]
KAY, J., R.J. KUMMERFELD. An Individualised course for the C programming language. On-line Proceedings of the Second International WWW Conference, 1994, http://www.ncsa.uiuc.edu/SDG/IT94/Proceedings/Educ/kummerfeld/kummerfeld.html.
[24]
CALVI, L., P. DE BRA. Proficiency-Adapted Information Browsing and Filtering in Hypermedia Educational Systems. User Modeling and User-Adapted Interaction 7(4), 1997, 257-277.
[25]
PILAR DA SILVA, D. Concepts and documents for adaptive educational hypermedia: a model and a prototype. 2nd Workshop on Adaptive Hypertext and Hypermedia, 1998, 33-40. (URL: http://wwwis.win.tue.nl/ah98/Pilar/Pilar.html)
[26]
BRUSILOVSKY, P., S. RITTER and E. SCHWARZ. Distributed intelligent tutoring on the Web. Proceedings of AI-ED'97, 8th World Conference on Artificial Intelligence in Education, 1997, 482-489.