Assessing Digital Library Services

Mel Collier (Head, Bibliotheek, Katholieke Universiteit Brabant, Tilburg, The Netherlands)

Program: electronic library and information systems

ISSN: 0033-0337

Article publication date: 1 March 2002

Citation

Collier, M. (2002), "Assessing Digital Library Services", Program: electronic library and information systems, Vol. 36 No. 1, pp. 64-66. https://doi.org/10.1108/prog.2002.36.1.64.15

Publisher: Emerald Group Publishing Limited

Copyright © 2002, MCB UP Limited


Introduction

It has become commonplace to observe that the explosion of digital library (DL) research and development over the last decade has been technology‐led, with human, economic and business aspects taking second place. This is quite natural and reasonable: for a technology to move from concept to practicality, you have to show what the technology can do in theory in order to engage people in bringing it to reality. However, given the immense number of projects now completed and under way around the world, and the incalculable resources being invested in digital libraries, it is reasonable to expect that a phase of mature reflection and consolidation should now have begun. Evaluation methodologies for digital libraries are clearly one important component of that maturation process. An authoritative special issue from Library Trends should therefore be a timely and important contribution to the literature.

This issue is a compilation of essays by US‐based authors demonstrating a variety of approaches, some grounded in quite concrete operational or controlled environments, others apparently less so. The editorial starts on two warning notes: first, is the DL field sufficiently mature to be amenable to assessment? Second, what would be the evaluation models? It is as if the editor senses that the volume could struggle to take a coherent view. He would be right: it struggles.

A number of studies have shown that it is not even clear what is meant by digital libraries. Brophy (1999) observed that the perspectives of computer scientists and library/information scientists are so different that it is not clear whether they are talking about the same thing at all. When there is no general agreement on the definition of a digital library, a work on evaluation faces an immediate difficulty.

The essays

Taking the contributions in turn, we are first presented with a summary by the editor, Thomas A. Peters, of the arguments of the following essays. This is a précis rather than a critical essay on the contributions.

Borgman et al. describe the ADEPT project, a digital library of geo‐referenced information resources aimed at undergraduate teaching of geography at the University of California. It identifies five core skills to be developed and tested as learning outcomes for the undergraduates. Throughout the project there has been a focus on evaluation, which has had to be developed very much as the project went along, owing to the lack of coherent previous work. This project must be one of the first to provide serious results not only on the usefulness of a digital library but, more importantly, on its interaction with the learning environment. The methodologies described in the essay appear sound and could provide models for testing in other domains. The contribution is thought‐provoking, soundly argued and valuable.

David S. Carter and Joseph Janes present a wholly different paper on the analysis of digital reference questions posed to the Internet Public Library (IPL). This is a fascinating insight into the way the IPL works and into the nature of the enquiries. Anyone who has heard Janes speak about the IPL will know that it is an important and interesting project that has developed out of a student project into a self‐sustaining life of its own. I have a difficulty with this essay in the context of this compilation, however. The IPL reference queries are essentially e‐mail enquiries posed to the IPL organisation, which are then answered by people. Although the role of humans in the digital library is important, is the evaluation described here really evaluation of a digital library?

The next paper, by Paul Gorman et al., is based on a National Science Digital Library Initiative Phase 2 project; it tells us that experts create and use “bundles” of information to build awareness and solve problems. It is set in the context of medical information. It furnishes us with descriptions and even photographs of collections (bundles) of written medical records: a kardex, worksheets, flow sheets, manuscript notes. It reveals that some of these can be rather disorganised (messy bundles). Not until the last few of its 23 pages does the essay focus on digital libraries, and even then it fails to link its ponderous analysis to evaluation.

Daniel Greenstein then provides a broad overview of developments in digital libraries and their challenges. Having been involved in major initiatives in the UK and the USA, he is very well placed to do so. The challenges for digital libraries he mentions are standards, best practice, collection development, penetrating user communities and long‐term access to digital information. As such it is a rather high‐level review, or collection of observations. Given its author it is of interest but, for the reader seriously interested in evaluation, rather general.

Gary Marchionini provides an interesting paper on a “longitudinal and multi‐faceted view of evaluating digital libraries”. It is based on work relating to the Perseus Digital Library (PDL), a long‐running project (since 1987) providing digital resources in the humanities, particularly classical antiquity. Like ADEPT, this project seems to have taken a systematic approach to evaluation throughout its life. The PDL is now part of Digital Library Initiative Phase 2 and longitudinal assessment continues. The essay provides some useful suggestions for success factors in digital libraries. Evaluation methodologies have evolved from the early stages, when the project was based on CD‐ROMs, to up‐to‐the‐minute approaches to the analysis of present Web‐based services. This is an important and interesting contribution.

Thomas A. Peters, the editor of the compilation, then writes about “effective meta‐assessment of online reference services”. He defines meta‐assessment as “deliberate examination of the elements, basic conditions, and needs of a thing (service, event, system and so on) that transcend particular instantiations of that thing”. He further explains that meta‐assessment occupies the middle ground between philosophy and assessment. The essay is written in the context of online reference services, which in some cases raises the difficulty mentioned above, of the relevance of humans answering humans as a valid aspect of digital library evaluation. The essay seems to be arguing that for effective evaluation there need to be higher level criteria and understanding of generic needs on which to base the evaluation of a particular system or service. There is little to disagree with there. Whether this thesis merits the coinage of the term meta‐assessment is debatable.

Tefko Saracevic then presents an essay entitled “Digital library evaluation: toward an evolution of concepts”. This is a well‐constructed and concrete overview of the current state of the art in digital library services and of how their evaluation should be approached. It moves logically through the need for evaluation and suggests definitions, contexts and criteria. Uniquely among these authors, he seems to be aware of what is going on outside the USA. This is a valuable contribution and would have been better placed as the opening, scene‐setting essay in this book.

The final paper by Michael Seadle is entitled “Project ethnography: an anthropological approach to assessing digital library services”. It is about understanding the cultures of different groups and organisations that have a stake in a project. It describes the interests and aspirations of nine groups of players in the National Gallery of the Spoken Word, an NSF Digital Library Initiative Phase 2 project. The paper is interesting, putting the perspective of cultural aspiration on the agenda for assessment of digital libraries. It becomes side‐tracked, however, into issues that are more about the wants of researchers and funders, and the project as a process, than about what it is supposed to produce – the digital library. Project evaluation and digital library evaluation are not the same thing.

Conclusion

This compilation contains three valuable essays that are tightly focused on the title of the book: by Borgman et al., Marchionini and Saracevic, respectively. The other five range from quite interesting background and periphery to the curious. There is some padding and muddled thinking in places. The compilation would have benefited from much more rigorous editing and selection of material. It is seriously flawed by not containing any representative work from outside the USA (for instance, Europe, Japan or Australasia), nor, apart from Saracevic, any reference to such work. Whilst evaluation methodology may or may not be more advanced there, both the UK’s Electronic Libraries (eLib) programmes (http://www.ukoln.ac.uk/services/elib/papers/tavistock/supporting/#tavistock) and the European Union’s telematics programmes (http://www.cordis.lu/libraries/en/studies.html) have quite stringent requirements for evaluation at programme and project level. Although the book will be bought for completeness by those specialising in digital library evaluation, it does not really live up to its promise.

Notes by the Reviews Editor

(This section contains notes on works (e.g. directories, short publications and selected product guides) for which a full review is considered to be unsuitable. This is not to suggest, however, that such works are regarded in any way as ephemeral or unimportant.)

Reference

Brophy, P. (1999), Digital Library Research Review: Final Report, Library & Information Commission, London, available at: www.lic.gov.uk (see also Brophy’s work on performance measurement in digital libraries for the eLib and EU programmes).
