Gary Hardy and Dana McKay
Materials availability surveys are a common part of library benchmarking, particularly in Australia, where the CAUL Materials Availability Survey is an industry standard. To meet our comparative benchmarking obligations, and to establish a baseline prior to the introduction of a new library system, Swinburne ran a benchmarking survey in May 2009. Because materials availability surveys are relatively common, we were able to learn from other libraries' experiences, and these considerably influenced our methodology: we conducted an online-only version of the CAUL survey, and we attempted to pre-empt the positive bias we have seen in the results of other surveys by making it no easier to report success than failure. Despite our best efforts to gain a realistic picture of information activities within the library, we found that the majority of survey respondents reported on their experiences searching for books, and that the majority of them claimed to be successful. This is in stark contrast to our usage statistics, which suggest that online journal articles are used considerably more than books, and to our library surveys, which tell us our patrons are less happy with our book collections than with many other resources. Closer examination, however, shows that the results are not as positive as they first appear, and reveals a number of sticking points in the information-seeking process. In this paper we will discuss our methodology in more depth, particularly the reasons for running our survey online. We will examine our results more closely, and reflect on where in the search process failure occurs for both online and offline resources. Finally, we will examine the ways in which we might improve our users' experience of searching for resources, even when they do succeed in finding them.