The ARIA Photo Agent (video)
Adaptive Linking between Text and Photos
- Text is used for searching as it is typed
- Text is matched with photo descriptions
- keywords, people, place and time
- Database with “common sense” used
- Adaptive sorting of photos (the search results)
- Automatic annotation of selected photos
- Annotations (conceptual descriptions) of photos can also be updated manually
- Project webpage: http://web.media.mit.edu/~lieber/Lieberary/Aria/Aria-Intro.html
Notes:
The ARIA Photo Agent is a good example of a system that performs adaptation without any predefined rules or concept relationships: constructing those rules is itself part of the adaptation process.
ARIA uses a combination of interesting techniques, and its power lies in this combination:
First of all, ARIA works as a kind of search engine. As the user types a story, that text is continuously interpreted as a search query; the most recently added words, and the words near where the user is currently editing, carry the most weight in the search.
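The papers do not publish ARIA's implementation, but such recency weighting could look something like the following sketch (the names and the decay formula are hypothetical assumptions):

```python
# Hypothetical sketch: weight words near the editing position more
# heavily. Names and the decay formula are illustrative assumptions,
# not taken from the ARIA papers.
def query_weights(words, caret_index, window=10):
    """Weight each word by its distance from the caret position."""
    weights = {}
    for i, word in enumerate(words):
        distance = abs(i - caret_index)
        weights[word] = max(weights.get(word, 0.0),
                            1.0 / (1.0 + distance / window))
    return weights

story = "Last June we went to Ken and Mary's wedding".split()
print(query_weights(story, caret_index=len(story) - 1))
# "wedding" gets weight 1.0; words typed earlier get smaller weights
```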
The search retrieves photos by matching the query against each photo's annotations, and the retrieved photos are adaptively re-sorted by relevance as the text changes.
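A minimal sketch of such annotation matching and sorting, assuming photos are simply pairs of an identifier and a set of annotation terms (again hypothetical, not ARIA's actual code):

```python
# Hypothetical sketch: score each photo by the overlap between the
# weighted query and the photo's annotation terms, then sort.
def rank_photos(photos, weights):
    """photos: list of (photo_id, set of annotation terms)."""
    def score(photo):
        _, annotations = photo
        return sum(weights.get(term, 0.0) for term in annotations)
    return sorted(photos, key=score, reverse=True)

photos = [("img_042.jpg", {"wedding", "Mary", "church"}),
          ("img_007.jpg", {"beach", "sunset"})]
print(rank_photos(photos, {"wedding": 1.0, "Mary": 0.8}))
# the wedding photo ranks first
```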
The search greatly benefits from a logical inference system and a database of common-sense information. In the paper and video demonstration we see a story about Ken and Mary’s wedding. Common sense indicates that Mary is the bride and Ken is the groom, and further common-sense inference deduces much more information about the wedding.
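The following toy sketch illustrates the idea of common-sense query expansion; the relation table is invented for illustration and is far smaller and cruder than a real common-sense database:

```python
# Toy stand-in for a common-sense knowledge base; these relations
# are invented for illustration only.
COMMON_SENSE = {
    "wedding": {"bride", "groom", "cake", "church"},
    "bride": {"wedding", "dress"},
}

def expand_query(terms, kb=COMMON_SENSE):
    """Add concepts that common sense relates to the query terms."""
    expanded = set(terms)
    for term in terms:
        expanded |= kb.get(term, set())
    return expanded

print(expand_query({"wedding"}))
# -> {'wedding', 'bride', 'groom', 'cake', 'church'}
```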
Photos are also selected based on place and time. Recent photos have a high chance of being relevant because the story is likely about a recent event.
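One plausible way to encode that recency bias is an exponential time decay on the photo's timestamp; the half-life value below is an arbitrary assumption:

```python
# Hypothetical sketch: an exponential recency boost on the photo's
# timestamp; the 30-day half-life is an arbitrary assumption.
from datetime import datetime, timedelta

def recency_boost(taken_at, now=None, half_life_days=30.0):
    """A photo loses half its boost every `half_life_days` days."""
    now = now or datetime.now()
    age_days = (now - taken_at).total_seconds() / 86400.0
    return 0.5 ** (age_days / half_life_days)

print(recency_boost(datetime.now() - timedelta(days=30)))  # ~0.5
```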
When a photo is selected, the textual context in which it is placed is used to update the photo’s annotations.
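A rough sketch of such context-based annotation, assuming the words surrounding the dropped photo are harvested minus a stopword list (the helper and its filtering are hypothetical):

```python
# Hypothetical sketch: when a photo is dropped into the story,
# harvest salient nearby words as additional annotations.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "at", "in"}

def annotate_from_context(annotations, surrounding_words):
    """Merge context words (minus stopwords) into the photo's
    existing annotation set."""
    salient = {w.lower() for w in surrounding_words
               if w.isalpha() and w.lower() not in STOPWORDS}
    return annotations | salient

context = "Ken and Mary cutting the cake at the reception".split()
print(annotate_from_context({"wedding"}, context))
# -> {'wedding', 'ken', 'mary', 'cutting', 'cake', 'reception'}
```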
The ARIA Photo Agent is nicely illustrated by a video made by the author, Hugo Liu, from MIT.