How is the LibQUAL+® survey constructed and conducted?
The LibQUAL+® survey evolved from a conceptual model based on the SERVQUAL instrument, a popular tool for assessing service quality in the private sector. The Texas A&M University (TAMU) Libraries and other libraries used modified SERVQUAL instruments for several years. ARL, representing the largest research libraries in North America, partnered with TAMU to develop, test, and refine a newly adapted tool to serve the particular requirements of libraries: LibQUAL+®. After years of revision based on data collected from thousands of library users, the LibQUAL+® survey has evolved into a protocol consisting of “22 items and a box.”
The 22 core survey items measure user perceptions of service quality in three dimensions: Affect of Service, Information Control, and Library as Place. For each item, users indicate their minimum service level, desired service level, and perceived service performance. The survey contains additional items that address information literacy outcomes, library use, and general satisfaction. An open-ended comments box provides a wealth of information for qualitative analysis. Participating libraries also have the option to select up to five additional local questions to add to their survey.
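Each core item therefore yields three ratings per respondent, from which gap scores are commonly derived. The sketch below illustrates the idea; the 1–9 scale and the gap names reflect common LibQUAL+® usage, but the code itself is an illustrative assumption, not the official scoring tool:

```python
# Illustrative sketch of LibQUAL+-style gap scores (not the official scoring code).
# Each core item collects three ratings, typically on a 1-9 scale:
#   minimum   - the lowest service level the user would accept
#   desired   - the service level the user would like
#   perceived - the service level the user believes is actually delivered

def gap_scores(minimum, desired, perceived):
    """Return (adequacy_gap, superiority_gap) for one item."""
    adequacy = perceived - minimum     # positive: service exceeds the minimum
    superiority = perceived - desired  # usually negative: room to improve
    return adequacy, superiority

# Example: a user rates an item minimum=5, desired=8, perceived=7.
adequacy, superiority = gap_scores(5, 8, 7)
print(adequacy, superiority)  # 2 -1
```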
In 2008, the ARL/Texas A&M research and development team tested an alternative form of the conventional LibQUAL+® survey, called "LibQUAL+® Lite." The Lite protocol uses item-sampling methods to (a) gather data on all 22 LibQUAL+® core items while (b) requiring each individual user to respond to only a subset of the 22 core questions. The mechanics of this item-sampling strategy, and some results from the spring 2008 pilot testing of the "LibQUAL+® Lite" protocol, are described in two recent articles, which you can access on our LibQUAL+® Lite info page. A 2009 dissertation by Martha Kyrillidou adds comprehensive data analysis and a literature review related to the new Lite protocol.
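The item-sampling idea behind the Lite protocol can be sketched as follows. The sampling rule shown (each respondent receives a random subset of a fixed size) is an illustrative assumption, not the exact published Lite design:

```python
import random

CORE_ITEMS = [f"item_{i:02d}" for i in range(1, 23)]  # the 22 core questions

def lite_form(items, per_respondent=8, rng=random):
    """Pick a random subset of the core items for one respondent.

    Across many respondents every item is answered by someone, so the
    library still gathers data on all 22 items, while each individual
    answers only a fraction of them.  The subset size (8) is an
    illustrative assumption, not the published Lite design.
    """
    return sorted(rng.sample(items, per_respondent))

form = lite_form(CORE_ITEMS)
print(len(form))  # 8
```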
Conducting the LibQUAL+® survey requires little technical expertise on your part. You invite your users to take the survey, distributing the URL for your library's Web form via e-mail, on your Web site, or through another channel. Respondents complete the survey form, and their answers are sent to a central database. The data are analyzed and presented to you in reports describing your users' minimum, desired, and perceived service levels.
For sample screen shots of the survey, see Sample Screens from the LibQUAL+® Survey.
When is the best time to launch the survey?
Experience has shown that Monday or Tuesday morning is the best time of the week to send out your survey announcements and follow-up messages. Libraries should look for a quiet period during the semester to launch the survey locally. We recommend a survey run of at least three weeks.
Whom should we contact if we experience technical problems with the web survey form?
If you experience technical problems with the survey form, first contact your local IT personnel. If the problem cannot be resolved locally, send an e-mail message to firstname.lastname@example.org with the following information about your computer system: platform (e.g., Windows XP, Mac OS X); browser type and version (e.g., Firefox 3.6.2, IE 8.0); and a brief description of the problem, including what you were doing when the problem occurred and the page on which it appeared.
How should we handle paper surveys?
Assign each paper survey a consecutive number before giving it to respondents. Enter paper survey data EXACTLY as it appears on the page. During administration you will see a specific place in the system for submitting data from paper surveys. Data from paper surveys must be entered online before your institution's survey closes; after the closing date, no additional data can be added. Libraries should keep a tally of the number of forms entered by hand.
Is the preview survey data made available to survey participants?
No. Preview survey data are not made available to participants.
How can the survey results be confidential when respondents are asked to provide their e-mail address for the incentive prize drawings?
Although some information is captured from respondents, their privacy is protected in several ways. First, network addresses are captured, but they provide only very indirect information and would be difficult to trace back to an individual. Second, e-mail addresses are captured only when a participant chooses to provide one, and strict measures are taken to keep those addresses separate from survey responses. Once collected, the addresses cannot be linked back to an individual's responses, ensuring confidentiality for participants in the incentive drawings.
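One common way to implement this kind of separation is to store e-mail addresses in a structure that carries no key back to the response records. The following is a hypothetical sketch of that idea, not the actual LibQUAL+® database design:

```python
# Hypothetical sketch of separating drawing entries from survey responses.
# The actual LibQUAL+ system may differ; the point is only that the
# e-mail store carries no key linking back to any response record.

responses = []        # survey answers, with no identifying fields
drawing_entries = []  # e-mail addresses only, with no response id

def submit(answers, email=None):
    responses.append({"answers": answers})  # stored without any identifier
    if email:                               # only if the respondent opts in
        drawing_entries.append(email)       # stored with no link to answers

submit({"item_01": 7}, email="user@example.edu")
submit({"item_01": 5})  # this respondent opted out of the drawing
```

Because nothing in `drawing_entries` points at a row in `responses`, even someone with full access to both stores cannot pair an address with a set of answers.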
When will we receive a list of our local incentive prize winners?
As soon as you close your institution’s survey, the de-duplicated, randomly generated list of local prize winners will be automatically compiled. You can view the list of your local winners online via the Survey Management Center.
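The de-duplicated random draw can be sketched as follows; this is an illustrative assumption about the mechanics, and the number of winners is a local decision:

```python
import random

def draw_winners(entries, n_winners, rng=random):
    """Remove duplicate e-mail addresses, then draw winners at random."""
    # Normalize case and whitespace so "A@x.edu " and "a@x.edu" count once.
    unique = sorted(set(e.strip().lower() for e in entries))
    return rng.sample(unique, min(n_winners, len(unique)))

entries = ["a@x.edu", "A@x.edu ", "b@x.edu", "c@x.edu"]
winners = draw_winners(entries, 2)
print(winners)  # two distinct addresses drawn from the de-duplicated pool
```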
Who will notify the local incentive prize winners?
Each participating library is responsible for publicizing the incentive drawings and notifying its local prize winners.
How can I get more information about LibQUAL+®?
For more information, browse the site or send an e-mail to email@example.com.