Key issues surrounding virtual chat reference model: a case study



Gang (Gary) Wan, Dennis Clark, John Fullerton, Gail Macmillan, Deva E. Reddy, Jane Stephens and Daniel Xiao
Texas A&M University, College Station, Texas, USA

Received 15 May 2008
Revised 30 July 2008, 14 August 2008
Accepted 14 August 2008

Abstract

Purpose – The purpose of this study is to investigate the use of co-browsing in live chat, customers' question types, referrals to subject experts, and patrons' usage patterns as experienced in the virtual reference (VR) chat services at Texas A&M University Libraries.

Design/methodology/approach – Chat transcripts from 2005 to 2007 were sampled and analyzed by peer reviewers. Statistical data from that period were also examined. A set of methods and a pilot study were created to define the measurement components, such as question types, expert handling, and co-browsing.

Findings – Co-browsing was used in 38 percent of the sampled chat sessions, and the Texas A&M University live chat service group considers it a useful feature. Of the questions received on VR, 84 percent were reference questions. Only 8.7 percent of the total questions (10 percent of the reference questions) needed to be answered by subject experts. The use of VR increased dramatically over the past two years at Texas A&M University. The findings also reveal users' logon patterns over weekdays and weekends.

Originality/value – The study contributes to and advances understanding of the role VR plays in a large academic library and the role co-browsing plays in VR services. It also provides a comprehensive method for transcript and usage data analysis, which may be replicated by other institutions running or evaluating similar services.

Keywords: Academic libraries, Reference services, Service levels

Paper type: Case study

Background

In early 2004, the Texas A&M University Libraries, following national trends and technologies, began offering live chat reference services. The first implementation was a collaborative venture with other Texas A&M University System universities. The pilot, funded for one year, ended in December 2004. Although the collaborative effort was deemed unsuccessful, the College Station campus did not want to abandon the live chat concept, deciding instead to implement a single-campus alternative. In January 2005, Texas A&M University Libraries implemented the VRLplus software from Docutek and began offering live chat virtual reference to the campus community. Faculty librarians as well as library reference staff share responsibility for staffing the service 120 person-hours per week.

Overall, the VRLplus-based live chat reference service has been a resounding success.


In the fall semester of 2007, a significant number of reference queries (about 1,750) at the Texas A&M University Libraries came through the VRLplus system. Much of the success is due to a comprehensive marketing program created by librarians and a significant commitment by library administration that made virtual reference a mainstream service, according to a study by two Texas A&M University researchers (MacDonald and VanDuinkerken, 2005).

However, after more than two years of using VRLplus as the live chat reference platform, the libraries had several concerns that required further investigation. Although the Texas A&M University Libraries was reasonably satisfied with VRLplus, there were unresolved concerns about the usefulness and stability of the co-browsing feature. In the context of virtual reference, co-browsing is a function that enables both the reference provider and the patron to access and navigate the same web pages in a shared window at the same time. Anecdotal and personal experience suggested that co-browsing was sometimes a frustrating experience for live chat providers and patrons alike; some providers hesitated to co-browse for fear that unexpected technical problems would interfere with the transaction. The authors' literature search indicated that many academic libraries had begun to move away from library-oriented, fee-based software and to embrace free, lighter-weight commercial instant messaging clients or web applications such as Yahoo! Messenger, AIM, or Meebo. Should Texas A&M University Libraries follow the trend toward the lighter instant messaging model or keep the current platform?

The success of the live chat service and the lingering concerns about co-browsing led to the appointment of a task force to examine the issues surrounding the current live chat reference model and to make recommendations for new services if appropriate. Reference service at A&M uses a tiered service model, with the reference desk staffed by paraprofessionals. The reference desk handles day-to-day research and general questions; more in-depth research questions are referred to subject specialists. Unlike the desk model, the chat service uses a more traditional model, with hours covered by both librarians and paraprofessionals. The committee was therefore charged to:

(1) determine how many questions are reference questions versus general service questions and, of the reference questions, how many require the subject expertise of a librarian;

(2) determine how essential co-browsing is and how often it is used, or attempted, by the chat service provider, given that this and other VRLplus features frequently fail and cause frustration among chat participants; and

(3) analyze patterns of chat use, including days and times of high or low volume, in order to optimize hours of operation and staffing.

Literature review

Although reference services assessment in academic libraries has a long history, the literature on evaluating virtual reference, specifically online chat reference, is fairly recent and growing significantly.
Saxton and Richardson (2002) categorized measures used in the evaluation of library reference services into two perspectives: the “obtrusive user-oriented approach primarily concerned with testing for levels of user satisfaction with the service”, and the “query-oriented approach primarily concerned with testing the accuracy of answers to reference questions”.

Kern (2006) suggested that libraries should view and evaluate virtual reference as part of the whole of a library's reference service. To integrate live chat with other components, she developed an outline for a holistic evaluation plan for a single reference service. The availability of transcripts in chat reference has provided a convenient tool for unobtrusive, query-oriented evaluation of chat reference services.

McGraw et al. (2003) described their experience of promoting and evaluating virtual reference service at the University of North Carolina at Chapel Hill Health Sciences Library. As one of the early adopters of virtual reference, UNC Chapel Hill saw light use of the service in its first year. McGraw et al. analyzed 82 transcripts and examined referring web pages, types of questions asked, software features used, and user login data. They also surveyed users on their perception of the service and their preferred service hours, and used the gathered data to develop promotion plans. VR has undergone many changes over the past few years, but the methodology of this research remains a good example for later adopters.

Nilsen and Ross (2006) also discussed the evaluation of virtual reference services from the user perspective. Their study focused on the factors that make a difference to users' satisfaction with their virtual reference experience. They examined user accounts of virtual reference transactions, which indicated that the reference interview had almost disappeared. Among the reasons they identified for staff failure to conduct reference interviews were a perceived need to respond quickly within the chat session, the rudimentary and inadequate nature of the forms used in email reference, and the challenges of communicating without vocal and physical cues.

Similarly, Kwon and Gregory (2006) analyzed and coded the chat transcripts from a large public library system based on the "RUSA (Reference and User Services Association) guidelines for behavioral performance of reference and information services providers". Their study focused on the influence of librarians' behaviors in chat sessions on user satisfaction. The results provided significant assistance to staff training in chat reference practice and identified opportunities for future adjustments to the RUSA guidelines.

Moyo (2006) assessed the virtual reference services at Penn State University Libraries and their function as online instruction. She reviewed and analyzed a number of sample chat transcripts, paying particular attention to instructional elements. A list of instructional attributes was developed, and all sample transcripts were coded against this list. Moyo's research indicated that the questions incorporating the highest rate of instructive elements were either instructional or research (subject) questions, though questions outside these two categories also drew responses that incorporated instruction. The study suggested that chat reference at Penn State was a good complement to face-to-face reference in delivering bibliographic instruction.

Pomerantz and Luo's (2006) evaluation of virtual reference service at UNC was a good example of the user-oriented approach; their study collected user data through an exit survey.
Their analysis of users' motivations for searching and their subsequent use of information illuminates the process of reference interactions: motivation precedes a reference session, and usage extends it.


Moreover, Luo (2007) proposed a framework of perspectives and measures for chat reference based on existing literature. She summarized four evaluation perspectives (economic, service process, resources, and service outcomes or products) and five areas of measurement ("descriptive statistics, log and report analysis, user satisfaction, cost, and staff time expended"). This framework gave chat reference evaluators a clear idea of what is important and how it should be evaluated.

Current literature is rich with different methods and approaches to analyzing virtual reference services. Given the variety of approaches, it seems apparent that no single approach can serve every institution or research need. Texas A&M Libraries, having examined many of these approaches, chose to combine several techniques from the literature in an attempt to answer its particular assessment needs.

Methodology

Information to answer questions 1 and 2 above can be obtained by analyzing chat transcripts. At the time of the assessment, the VR database contained about 5,900 session transcripts from the previous two years. Random sampling was used to select transcripts for content analysis. Based on the table for determining sample size from a given population published by Krejcie and Morgan (1970), a sample size of 360 transcripts would be appropriate to represent the entire population. A systematic random sampling method was used to obtain the sample: every 15th transcript was retrieved from the 5,900 transcripts, yielding a total of 392 transcripts.

The ID numbers of these transcripts were added to a spreadsheet and linked to the Docutek transcripts, with the attributes of each chat session added as columns. For each session transcript, these components were examined and labeled: whether the question was a reference question or a general question; if a reference question, whether it was referred to a subject expert; whether co-browsing was used in the chat; and, if so, whether any technical problems were encountered.
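As an illustration of this sampling and coding setup, the following minimal Python sketch draws a systematic sample and lays out one coding record. The sequential ID range and the column names are hypothetical stand-ins, not the study's actual VRLplus export.

```python
import random

# Hypothetical sequential IDs for the ~5,900 transcripts in the VR database;
# the real VRLplus export may look different.
transcript_ids = list(range(1, 5901))

# Systematic random sampling: random starting point, then every 15th ID.
INTERVAL = 15
start = random.randrange(INTERVAL)
sample = transcript_ids[start::INTERVAL]   # about 5,900 / 15 = 393 IDs

# Each sampled session is then coded on a spreadsheet; these column names
# are illustrative stand-ins for the attributes described above.
coding_row = {
    "transcript_id": sample[0],
    "is_reference_question": None,    # reference vs. general question
    "referred_to_subject_expert": None,
    "cobrowse_attempted": None,
    "cobrowse_error": None,           # co-browsing vs. other technical error
}
print(len(sample), coding_row)
```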

Before analyzing the sample transcripts, the research questions were further refined. A session was classified as co-browsing if the chat provider attempted to configure the feature; if an error was encountered in a chat session, it was coded as either a co-browsing error or another technical error. This definition captured how often VR providers actually intended to use the feature. VR questions were categorized into two groups: reference and non-reference. Non-reference questions included the following sub-categories: directional, technical, hours-related, and general inquiries.

Defining which questions require the expertise of a subject librarian requires more interpretation. To classify those questions, the task force reviewed chat transcripts and developed a list of criteria for determining whether a question needed subject expertise. Questions that satisfied any of the criteria were coded as requiring subject expertise (see the list below; a sketch of how these criteria combine into a single code follows the list).

Criteria for determining if a VR question needs subject expertise:

• Transferred question: the VR provider transfers (or attempts to refer) the question to a subject specialist, gives the specialist's contact information, or transfers the question to a subject-focused library (such as business, medical science, etc.). Note: Texas A&M University Libraries include a main library, a business library, a medical sciences library, a special collections library, and a political science library.

• The patron is a researcher requesting recommendations on relevant library resources, a sophisticated search strategy, and/or advanced manipulation of the database being searched. For example, the researcher is looking for materials data.

• The transcript indicates that the search was not productive or successful after searching one (if the patron left after searching only one database) or more databases.

• Background knowledge of the subject is required to obtain accurate results (more specific examples are some legal search questions, questions on health information, etc.).

• Specialized database: the question requires searching one or more specialized databases such as Beilstein, SciFinder, Datastream, Knovel advanced search, International Financial Statistics, etc.
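Read as a coding rule, the criteria form a simple disjunction: a session needs subject expertise if any one of them applies. Below is a minimal sketch of that rule, with hypothetical flag names standing in for the coders' manual judgments.

```python
def needs_subject_expertise(session: dict) -> bool:
    """Code a session as needing subject expertise if ANY criterion holds.

    The flag names are hypothetical; in the study, each flag was judged
    manually by a coder reading the transcript.
    """
    criteria = (
        session.get("transferred_to_specialist", False),
        session.get("needs_advanced_search_strategy", False),
        session.get("search_unproductive", False),
        session.get("needs_subject_background", False),
        session.get("needs_specialized_database", False),
    )
    return any(criteria)

# Example: a SciFinder question that the provider referred onward.
print(needs_subject_expertise({
    "transferred_to_specialist": True,
    "needs_specialized_database": True,
}))  # True
```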

Before analyzing all transcripts, a pilot study using ten of the transcripts was conducted. Since all six members analyzed and coded transcripts, inter-coder reliability was an important concern. To ensure high reliability, all six members rated the same ten transcripts and discussed their ratings before the formal transcript analysis. The inter-coder reliability was around 0.8 for determining whether a reference question needed subject expertise (two members disagreed on the ratings of two transcripts; the data were entered into a text file and the inter-coder reliability was calculated with a free online calculator available at www.med-ed-online.org/rating/reliability.html). The inter-coder reliability for the other questions was even higher. After further discussion and clarification, all members agreed on the ratings for these ten transcripts. At this point, the remaining 382 sample transcripts were divided into six groups, and each member worked on a group of roughly 65 records. When there was doubt about particular cases, the whole group discussed them and reached consensus.
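The study computed reliability with a free online calculator. As a simpler illustration (not necessarily the same coefficient the calculator reports), average pairwise percent agreement over binary codes can be computed as follows; the ratings shown are made up, not the study's actual data.

```python
from itertools import combinations

# Hypothetical binary ratings ("needs subject expertise") from six coders
# on the same ten pilot transcripts; 1 = yes, 0 = no.
ratings = {
    "coder1": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],
    "coder2": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],
    "coder3": [1, 0, 0, 0, 0, 0, 1, 0, 0, 0],  # disagrees on transcript 4
    "coder4": [1, 0, 0, 1, 0, 0, 1, 0, 1, 0],  # disagrees on transcript 9
    "coder5": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],
    "coder6": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],
}

def pairwise_agreement(a, b):
    """Fraction of transcripts on which two coders gave the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

pairs = list(combinations(ratings.values(), 2))
mean_agreement = sum(pairwise_agreement(a, b) for a, b in pairs) / len(pairs)
print(f"average pairwise agreement: {mean_agreement:.2f}")
```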

To unearth patterns of chat use, including times and days of high or low volume (question 3), several different sets of data were examined. First, the data on current VR service hours from the VRLplus administration portal were reviewed. A table showing the number of chat transcripts by hour of the day provided pertinent information on the distribution of VR questions during service hours; likewise, a table of average sessions per day showed which days of the week were busiest. Second, statistical data for all questions were gathered, including those sent through email, which captured questions answered both inside and outside the chat service hours. Often, users arrived after the chat service had closed and went away without leaving an email message; to gauge this kind of "intended use", web visit statistics were examined. Third, through the VR exit survey, data were collected from patrons indicating their preferred hours for library research.

Data and discussion

Data on co-browsing usage

The VRLplus statistical reports did not indicate the frequency with which co-browsing was attempted or configured; the two related reports covered only users' entry and exit browser modes. Information on actual co-browsing attempts was therefore ascertained by reviewing the transcripts. When a VR provider tried to configure co-browsing, whether successfully or not, the transcript included a system-generated indication, and such transcripts were coded as "co-browsing attempted". If the co-browse attempt was successful, an additional system-generated message appeared in the transcript. The dialogue between a VR provider and a patron could also show whether there were co-browsing configuration errors. Transcript analysis was thus the best method for examining the success of co-browsing in the context of the research questions in this assessment.

The data from the sample transcripts indicated that in 150 of the 392 sampled chat sessions (38 percent), VR providers attempted to configure co-browsing during the session. Of these 150 sessions, 38 (25 percent) indicated errors when co-browsing was configured. Typical errors included the patron's chat window closing immediately after the provider clicked the "configure to co-browse" button, and the co-browsing window failing to load.

As these statistics show, co-browsing remains a useful function in virtual reference. It can be argued that co-browsing would be used even more frequently if the feature were more dependable, since many VR providers said they avoided it so as not to drop patrons from the chat session. Given the relative use of co-browsing, it may not be appropriate to replace the current VR platform with instant messaging clients or web applications such as Yahoo! Messenger, AIM, and Meebo, which have no co-browsing functionality. However, these instant messaging applications can be good supplements to the current VR platform, since they provide some enhanced features not offered through VRLplus, such as file transfer and audio/video chat.

Question distribution

The VR Task Force coded all sample transcripts and the statistical data based on the definitions of question types in the previous section (see Table I). The results indicate that the majority (84 percent) of questions received during chat reference service are reference questions. Only a small number (8.7 percent of the total questions, or 10 percent of the reference questions) require a subject specialist. These data provide good information for future decision making, one example being how to allocate staff resources to chat reference. Since most questions can be handled by a reference generalist, it is appropriate to continue the current staffing model, i.e. staffing chat reference with generalists and having them refer more in-depth research questions to subject specialists. Had the data indicated a significant percentage of in-depth research questions, it might have been necessary to adjust the staffing model accordingly.

Table I. Question distribution

Type of question                          Number of sessions    Percentage
Non-reference questions                   61                    16
Reference questions                       319                   84
Questions needing a subject specialist    33                    8.7
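The percentages in Table I follow directly from the session counts. The short check below reproduces both ways of expressing the subject-specialist share (8.7 percent of all sampled questions, or about 10 percent of the reference questions).

```python
# Session counts from Table I.
non_reference = 61
reference = 319
needs_specialist = 33  # a subset of the reference questions
total = non_reference + reference  # 380 coded sessions

print(f"non-reference share:            {non_reference / total:.1%}")      # ~16.1%
print(f"reference share:                {reference / total:.1%}")          # ~83.9%
print(f"specialist share of total:      {needs_specialist / total:.1%}")   # ~8.7%
print(f"specialist share of reference:  {needs_specialist / reference:.1%}")  # ~10.3%
```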

Data on trends of usage

The administration portal of VRLplus provides several statistical reports that are useful for assessing VR usage trends. One report gives the average number of sessions per day; see Table II for sample data at Texas A&M University Libraries. The data before July 7, 2006 were not in the current database because of a system upgrade in July 2006, but the historical statistical data were available and demonstrated a similar distribution pattern. The regular chat reference service hours as of May 30, 2007 were: Monday to Thursday, 10 a.m.-10 p.m.; Friday and Saturday, 10 a.m.-6 p.m.; and Sunday, 2 p.m.-10 p.m.

An interesting trend emerges from the data. Although there were fewer chat reference service hours on the weekend, with only eight hours on each of Saturday and Sunday, queries remained nearly as high as on weekdays. Sunday was also the busiest weekend day, receiving nearly twice as many questions as Saturday (373 versus 204 sessions). The data also indicated that volume was lower on Thursdays and Fridays than on other weekdays. Additionally, comparing this table with the data from 2005 to 2006 shows that the overall number of questions received had increased significantly. Based on the analysis of these data, the task force recommended both additional service hours and an additional service provider on Sunday; hours on Saturday could be reduced, since fewer questions were received.

While Table II reveals the daily usage patterns, Table III discloses the hourly usage patterns within the day. The regular chat service hours start at 10 a.m.; Table III, however, shows some scattered activity between 8 a.m. and 10 a.m. One possible explanation is that some individual librarians used the quiet time for VR practice. Since the numbers are not significant, these sessions are ignored in this study. Table III clearly shows that peak hours for chat reference are from 10 a.m. to 4 p.m. In the later evening hours, the volume of questions dropped to about half that of the mid-afternoon hours. See Figure 1 for the distribution chart of chat sessions by hour of the day.

[Figure 1. Distribution chart of average chat sessions by hour (July 7, 2006 to May 30, 2007)]

When the chat service was closed, patrons were able to email their questions through the VR web site. The task force examined email question volume to help determine whether chat service hours were adequate. The volume of email questions was highest during the two hours immediately before and after chat service hours, indicating a potential demand for extended chat hours if the service were active at those times. The volume was not high enough, however, to recommend expanding weekday hours.

The library has conducted two online exit surveys since 2005, both of which included questions about patrons' preferred research hours. Interestingly, nearly a quarter (24.7 percent) of respondents chose to do library-oriented research after the chat reference service had closed. Although a majority still preferred to do research between 12 p.m. and 10 p.m., when chat service was already offered, the number of respondents who desired late-night research was significant enough to recommend that the library pilot extended evening hours.

Finally, the web statistics of the VR web site, which indicate the hourly trends of web traffic, were examined. Figure 2 shows the total number of hits and pages visited by hour in 2007.

Table II. Average chat sessions by day of week (July 7, 2006 to May 30, 2007)

Day       Number of sessions    Average
Sun.      373                   7.94
Mon.      811                   17.26
Tues.     696                   14.81
Wed.      743                   15.81
Thur.     583                   12.67
Fri.      415                   8.83
Sat.      204                   4.34
Total     3,825                 n/a

Table III. Average chat sessions by hour of the day (July 7, 2006 to May 30, 2007)

Hour              Number of sessions    Percentage    Average
12 a.m.-7 a.m.    0                     0             0.00
8 a.m.            15                    0             0.05
9 a.m.            19                    0             0.06
10 a.m.           334                   9             1.02
11 a.m.           319                   8             0.97
12 p.m.           322                   8             0.98
1 p.m.            351                   9             1.07
2 p.m.            449                   12            1.37
3 p.m.            408                   11            1.24
4 p.m.            386                   10            1.18
5 p.m.            291                   8             0.89
6 p.m.            236                   6             0.72
7 p.m.            235                   6             0.72
8 p.m.            236                   6             0.72
9 p.m.            205                   5             0.63
10 p.m.           20                    1             0.06
11 p.m.           0                     0             0.00
Total             3,826                 100           n/a
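The day-of-week and hour-of-day aggregations behind Tables II and III can be reproduced from raw session timestamps. The sketch below assumes a hypothetical CSV export with one session start time per row; the file and column names are illustrative, not VRLplus's actual report format.

```python
import pandas as pd

# Hypothetical export: one row per chat session with its start timestamp.
sessions = pd.read_csv("vr_sessions.csv", parse_dates=["start_time"])

# Table II analogue: total and average sessions by day of week.
span_days = (sessions["start_time"].max() - sessions["start_time"].min()).days
weeks = span_days / 7
by_day = sessions["start_time"].dt.day_name().value_counts()
print(by_day)          # total sessions per weekday
print(by_day / weeks)  # average sessions per weekday

# Table III analogue: session counts by hour of the day.
by_hour = sessions["start_time"].dt.hour.value_counts().sort_index()
print(by_hour)
```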


[Figure 2. Web traffic of the VR site by hour in 2007]

Figure 2 provides helpful information about the usage of the VR web site, showing very light traffic between 11 p.m. and 8 a.m. The web traffic during chat service hours parallels the findings in Table III and Figure 1. All these statistical data are valuable for understanding the usage trends of the chat reference service, and they allow reference management to adapt service hours to patrons' needs.


Conclusion

Virtual reference services, particularly chat service, are clearly an integral part of reference service at Texas A&M University Libraries, based on volume alone. For this reason, it seems imperative that libraries not only understand the trends in usage and perception but embrace them. Texas A&M University Libraries has sought to understand the role of chat service through transcript and data analysis, precipitating crucial adjustments to the staffing, hours, and platform of the service. Understandably, there is no standardized methodology for evaluating virtual reference service at this time, since evaluation purposes, user demographics, and evaluation criteria vary widely from library to library. This study, however, provides a much-needed chronicle of one library's effort to serve patrons more effectively. Texas A&M University Libraries will put more effort into improving its current live chat service; future plans include investigating new tools and trends in web communication, as well as new ways to assess their use effectively. It is hoped that this paper will contribute to the collaborative endeavors now under way across the academic library community.



References

Kern, M. (2006), "Looking at the bigger picture: an integrated approach to evaluation of chat reference services", Reference Librarian, Vol. 46 Nos 95/96, pp. 99-112.

Krejcie, R. and Morgan, D. (1970), "Determining sample size for research activities", Educational and Psychological Measurement, Vol. 30 No. 3, pp. 607-10.

Kwon, N. and Gregory, V. (2006), "The effects of librarians' behavioral performance on user satisfaction in chat reference services", Reference & User Services Quarterly, Vol. 47 No. 2, pp. 137-48.

Luo, L. (2007), "Chat reference evaluation: a framework of perspectives and measures", Reference Services Review, Vol. 36 No. 1, pp. 71-85.

MacDonald, K. and VanDuinkerken, W. (2005), "Distance education and virtual reference: implementing a marketing plan at Texas A&M University", Journal of Library & Information Services in Distance Learning, Vol. 2 No. 1, pp. 29-40.

McGraw, K., Heiland, J. and Harris, J.C. (2003), "Promotion and evaluation of a virtual live reference service", Medical Reference Services Quarterly, Vol. 22 No. 2, pp. 41-53.

Moyo, L. (2006), "Virtual reference services and instruction: an assessment", Reference Librarian, Vol. 46 Nos 95/96, pp. 213-30.

Nilsen, K. and Ross, C. (2006), "Evaluating virtual reference from the users' perspective", Reference Librarian, Vol. 46 Nos 95/96, pp. 53-79.

Pomerantz, J. and Luo, L. (2006), "Motivations and uses: evaluating virtual reference service from the users' perspective", Library & Information Science Research, Vol. 28 No. 3, pp. 350-73.

Saxton, M.L. and Richardson, J.V. Jr (2002), Understanding Reference Transactions: Transforming an Art into a Science, Academic Press, Amsterdam.

Corresponding author: Gang (Gary) Wan can be contacted at: [email protected]
