Canadian Heritage - Patrimoine canadien
Creating and Managing Digital Content

Research on 'Quality' in Online Experiences for Museum Users


Evaluating 'Quality' in Online Museum Web Experiences


How relevant is the Engagement Factor (EF) statistic for measuring the quality of online experiences for museum users?

What value does the Engagement Factor have as a measure of success for VMC products, or is there a better alternative?

Factors to take into account in using Engagement Factor as a success indicator:


Caching:

Large ISPs (e.g., AOL, Sympatico) cache a lot of content, so many requests from their users never reach the VMC Web site directly and are therefore never logged.

IP addresses:

Users are often assigned a new IP address each time they connect to the Internet, so repeat Visitors are difficult to track.

Visit and Visitor issues:

    Hits: Search engine crawlers, spiders, and robots distort results by registering as 'hits.' A hit is registered both for the HTML page itself and for each image the page contains (i.e., one HTML page with five images registers as six hits).
    Page Views: Counting only requests for HTML pages factors out the multiple-image problem, but Page Views are still distorted by spiders, crawlers, and robots.
    Visits or User Sessions are generated by requests from distinct IP addresses. The number of Visits or User Sessions may include both repeat and first-time users.
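
The distinctions above can be sketched in code. This is a minimal illustration, not a real log-analysis tool: the log format, the sample entries, and the keyword-based bot filter are all assumptions for the example.

```python
# Minimal sketch: distinguishing hits, page views, and user sessions in a
# hypothetical access log, filtering crawler traffic by User-Agent keywords.
from collections import Counter

# Each entry: (ip_address, requested_path, user_agent) -- illustrative data only.
log_entries = [
    ("10.0.0.1", "/exhibit.html", "Mozilla/5.0"),
    ("10.0.0.1", "/img/a.gif",    "Mozilla/5.0"),
    ("10.0.0.1", "/img/b.gif",    "Mozilla/5.0"),
    ("10.0.0.2", "/exhibit.html", "Googlebot/2.1"),
    ("10.0.0.3", "/index.html",   "Mozilla/5.0"),
]

BOT_MARKERS = ("bot", "spider", "crawler")  # crude heuristic, not exhaustive

def is_bot(user_agent: str) -> bool:
    """Flag requests whose User-Agent looks like an automated crawler."""
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)

human = [entry for entry in log_entries if not is_bot(entry[2])]

hits = len(human)                                             # every request
page_views = sum(1 for e in human if e[1].endswith(".html"))  # HTML pages only
sessions = len({e[0] for e in human})                         # distinct IPs

print(hits, page_views, sessions)  # -> 4 2 2
```

Note how the one HTML page plus two images from 10.0.0.1 register as three hits but only one page view, and how the Googlebot request drops out entirely.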

Duration of visit:

The Duration statistic is highly variable. It gives neither a sense of the range of times nor any indication of whether users are actively looking at the site for the whole period. In reality, Duration records how long a user remains connected rather than how much time someone spends using the site. Connection (e.g., modem) speed might also affect this statistic.
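
One way to see why Duration is a weak measure: it is typically inferred as the gap between a session's first and last logged request. A sketch, using an assumed timestamp format and illustrative data:

```python
# Sketch: visit Duration inferred from request timestamps in a log.
# Duration is last request minus first request, so time spent reading the
# final page is invisible, and a user who leaves a tab open inflates it.
from datetime import datetime

# (ip_address, timestamp) pairs for one session -- illustrative only.
requests = [
    ("10.0.0.1", "2004-09-30 10:00:00"),
    ("10.0.0.1", "2004-09-30 10:07:30"),
    ("10.0.0.1", "2004-09-30 10:12:00"),
]

times = [datetime.strptime(ts, "%Y-%m-%d %H:%M:%S") for _, ts in requests]
duration = max(times) - min(times)

print(duration.total_seconds() / 60)  # -> 12.0 (minutes)
```

Whatever time the user spent on the final page after 10:12:00 is simply not recorded.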

Other comparisons and questions that could be explored:

  • Is there a measurable inverse relationship between the number of Visitors and the length of Visits, and between spring/fall and summer traffic?
  • Is there a correlation between publicity levels (unique referrals from other sites) and Visits?
  • Should a higher Engagement Factor be expected to correspond to a lower Visit/Visitor ratio and a longer Duration?
  • How can more repeat Visits by discrete users be measured?

Limitations of using Log files as a measurement tool:


Because of the way the Web operates, in particular the process of caching, logs do not reliably count the total number of page requests or user sessions (Peacock, 2002, p. 5). Log files consistently underreport repeat visitors and users from the most popular ISPs (because of caching). Log files may also inflate the number of unique visitors to a site, as the same user may be logged with multiple IP addresses during a single session, suggesting that the sample users recorded in the logs are more likely to be first-time visitors to the site. (http://www.archimuse.com/mw2002/abstracts/prg_165000775.html)

How online traffic can be a measure of success:


Log files are helpful in determining which paths users follow when navigating through Web sites and which content, pages and sections of the site are most interesting and engaging to visitors. For example, the Ontario Science Centre (OSC) Web development team tracks usage within their Web site, seeing where users are going online, and looking at what they are accessing (Soren & Lemelin, 2004). On-line traffic can be a measure of success particularly if Web developers can draw links between that traffic and visits to their physical institution (e.g., users accessing the OSC on-line calendar may be using the calendar to help plan their visit).
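
The kind of content-popularity tallying the OSC team performs can be sketched simply: count page views per path and rank them. The paths here are invented for illustration.

```python
# Sketch: ranking site sections by page views to see which content is most
# engaging -- the positive use of log data described above.
from collections import Counter

# Hypothetical page-view log (paths only); illustrative data.
page_views = [
    "/calendar", "/exhibits/cloth-and-clay", "/calendar",
    "/visit/hours", "/calendar", "/exhibits/cloth-and-clay",
]

popularity = Counter(page_views)
for path, count in popularity.most_common(2):
    print(path, count)
# -> /calendar 3
#    /exhibits/cloth-and-clay 2
```

A ranking like this is what lets developers link online traffic (e.g., heavy calendar use) to physical visits.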

Advantages of using Log files for statistical analysis:


Despite their limitations in determining unique and repeat visitors, log files produce quantitative data that can be subjected to statistical analysis. The data produced are a record of actual user behaviour rather than reported or assumed activity. Log data are recorded free of observer or questioner bias, and the data samples are large and can be tracked over time.

Visitors know what they are looking for:


A report entitled Tracking the Virtual Visitor (Johnson, 2000) noted that the National Gallery of Art in Washington has refined, and in some cases dramatically reshaped, the architecture of its Web site's interface to accomplish the following three goals:

  1. The Web-going public should be able to easily locate the NGA Web site and its on-line resources.
  2. Once they have arrived, Web visitors should be able to quickly find what they are looking for.
  3. When visitors leave the site, they should want to come back again.

A quick look at the Web log statistics on the search words and phrases used to query the NGA site demonstrated that many visitors have a clear idea about what they are looking for. The vast majority used very specific words and phrases, usually artist names and titles of works of art, rather than general terms to define their search for visual arts-related online materials.

Other Approaches to Evaluating User Experiences

It is important that Web development teams involve their target online users in developing personally meaningful, quality online experiences in which individuals can construct meanings in multiple ways. Performing user and usability testing, and providing opportunities for visitors to experience exhibitions both on-site and online, seem most likely to promote high-quality, engaging experiences (e.g., Harms & Schweibenz, 2000, http://www.archimuse.com/mw2001/papers/schweibenz/schweibenz.html). Furthermore, these approaches will demonstrate a Web site's 'exchange' function of establishing a network or forum among users, or between museum experts and users.3

Wertsch (2002) contends that one major problem that continues to be challenging for museum professionals is evaluation of a museum's impact on its visitors. Although Wertsch is discussing visits to physical museums, his comments seem equally as relevant for evaluating quality in online experiences on a museum's Web site:

    One encounters many complexities when considering this issue, but none is more unsettling than a basic quandary that underlies the whole project. On the one hand, we are called upon to assess the impact that museums have on visitors. Increasing claims that museums play an important educational role have served to up the ante on this issue dramatically. On the other hand, precisely what it is that we should be evaluating remains unclear. Should visitors be acquiring new information? Should they be developing new areas of curiosity? Should visitors be engaging in some sort of identity project? Or is there something else they should take away from a museum visit? (p. 113)

Usability Index

One approach to analyzing how users feel about a Web experience is a Usability Index, which is a "measure, expressed as a per cent, of how closely the features of a Web site match generally accepted usability guidelines" (Keevil, 1998, p. 271). The Usability Index consists of five categories:

  • Finding the information: Can you find the information you want?
  • Understanding the information: After you find the information, can you understand it?
  • Supporting user tasks: Does the information help you perform a task?
  • Evaluating the technical accuracy: Is the technical information complete?
  • Presenting the information: Does the information look like a quality product?

The big advantage of the Usability Index (or 'heuristics') for Web communication is that by contrasting these established usability principles with the Web site under evaluation, the evaluator or information designer can decide if usability problems exist, what kind they are, and how they can be removed. The disadvantage of the heuristics is that they are very detailed and complicated. (http://www3.sympatico.ca/bkeevil/sigdoc98/index.html)
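
As an illustration of how such an index yields a percentage, consider scoring a site against a checklist grouped under the five categories. The individual checklist items below are invented for the example; Keevil's actual checklist is far more detailed.

```python
# Sketch of a Usability Index score: the percentage of guideline checks a
# Web site passes. Categories follow Keevil's five; the items under each
# are assumptions for illustration only.
checklist = {
    "finding_information":       [True, True, False],  # e.g. search, sitemap, ...
    "understanding_information": [True, True],
    "supporting_user_tasks":     [True, False],
    "technical_accuracy":        [True, True, True],
    "presenting_information":    [True, False],
}

passed = sum(sum(items) for items in checklist.values())  # True counts as 1
total = sum(len(items) for items in checklist.values())
usability_index = 100 * passed / total

print(f"{usability_index:.0f}%")  # -> 75%
```

The detail the heuristics demand shows up directly here: a realistic checklist would have hundreds of items, not twelve.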

Using Median Dwell Time instead of Averages

Median dwell time measures how long people stay on a Web site while giving equal weight to unusually short and unusually long visits. Measuring median dwell time is often much more useful than determining average time online, because averages are distorted by visitors who enter and leave a site almost immediately, or who arrive and view every page.
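
The effect is easy to demonstrate with invented dwell times: a few bounce visits and one marathon session pull the mean far from what a typical visitor actually does, while the median stays put.

```python
# Mean vs. median dwell time. The data are illustrative: three quick
# "bounces", six ordinary visits, and one visitor who viewed every page.
import statistics

dwell_minutes = [0.1, 0.2, 0.3, 4, 5, 5, 6, 7, 8, 120]

mean = statistics.mean(dwell_minutes)
median = statistics.median(dwell_minutes)

print(round(mean, 1), median)  # -> 15.6 5.0
```

The mean suggests a typical visit of over fifteen minutes; the median shows the typical visitor stayed about five.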


3For an example of front-end, formative, and summative user testing during the development of an award-winning VMC Virtual Exhibit, Cloth and Clay: Communicating Culture, see: Shaughnessy, Dalrymple, & Soren (2004) http://www.archimuse.com/mw2004/abstracts/prg_250000759.html; Soren (2004a) http://www.informalscience.org/download/case_studies/report_68.pdf.

Virtual Museum of Canada (VMC)
Date Published: 2004-09-30
Last Modified: 2006-06-16
© CHIN 2006. All Rights Reserved