By Tessa Gibson

You’ve just wrapped up a fanfiction writing workshop for tweens at your public library, and you are feeling great about how the program went. You planned the program with specific learning goals in mind, used multiple instructional approaches to differentiate the content for diverse learners, and collaborated with a local adult fanfiction club to bring in mentor authors to work with participants. You know that 16 tweens attended, and you know that they left with smiles on their faces—but do you know what they learned?

The rise of instruction as a central service in public libraries raises questions about best practices and standards for assessing learning in this space. While there is a wealth of supporting literature on programming in public libraries, very little research examines the importance of learning outcomes and assessment. David Carr, a contributor to the National Impact of Library Public Programs Assessment, stated that “programming is effective to the degree it serves the authentic needs and interests of its target participants” (American Library Association, 2014, p. 17). Assessment helps librarians determine if their instruction, as designed, is aligned with their library’s mission and whether it is effectively meeting the needs or interests of the community. Regular assessment also informs how instruction is planned and framed within the context of learners’ existing skills and the cultural factors or norms that may influence learning outcomes. The lack of literature on learning outcomes and assessment in public libraries is, therefore, surprising, since there are evident benefits to both the library and its patrons.

However, examining assessment in a public library setting is not a simple task. Instruction in this space is typically designed to facilitate informal learning, a stark contrast to the more structured lessons taught in school or academic libraries. Furthermore, it can be difficult to measure the intangible outcomes that distinguish learning in the public library, such as emotional or developmental gains (Institute of Museum and Library Services, 2000). This chapter will survey the literature that does exist on the subject and extrapolate from tried-and-true assessment strategies used in school and academic libraries to suggest a way forward for assessing learning in the public library.

Assessment in Public Libraries

Assessment strategies are already present in most libraries to some extent. Statistics on circulating materials, interlibrary loans, and program attendance are small examples of how libraries use assessment strategies to evaluate their services. These are useful measures, but without additional information, they say very little about the impact the library has on its patrons and the community. Assessment in the public library should serve as a catalyst for a greater understanding of the library’s influence on patron learning and growth (Kirk, 1999). This information and its supporting documentation can serve the library in a number of ways.

According to the Institute of Museum and Library Services (2000), federal agencies and the public increasingly value accountability measures. Grants and other funding opportunities now require evidence of program results and patron satisfaction. Assessment measures produce tangible data that libraries can use to maintain the support of federal or public institutions and solicit external funding. The purpose of assessment in public libraries goes well beyond accountability, however. Assessing program designs, learning outcomes, and learner experience is central to informed decisions about revisions to management, policies or procedures, and mission statements. This knowledge helps libraries effectively respond to changes within the community and continuously meet patrons’ needs (American Library Association, 2014). Communities engage with the library’s programs and services in unique ways. Becoming familiar with the community’s diverse needs and experiences guides the library’s progress.

Integrating assessment as a regular practice also enables the public library to connect its services to educational gains in the academic system. Connected learning opportunities can positively affect participants’ academic performance on learning goals and standardized tests (Urban Libraries Council, 2016). Accordingly, public libraries can help schools close the achievement gap by giving children in need greater access to educational resources and opportunities outside of school hours (Metropolitan Group & Urban Libraries Council, 2015). Assessment is a significant point of engagement with the school curriculum and opens the public library to partnerships with other organizations.

Additionally, a strong collaborative relationship with schools and other academic institutions may help counter some of the disadvantages that low-income and marginalized children encounter during their education. Black, Hispanic, and low-income children are twice as likely to drop out of high school as their peers, due to barriers that hinder learning from elementary school onward (Council of the Great City Schools, Institute of Museum and Library Services, & Urban Libraries Council, 2016). Libraries are uniquely positioned to help children and their families confront these obstacles. For instance, bookmobiles can meet low-income children without transportation where they are, and children or families with limited language skills can be accommodated by the flexibility of programming or variety of material formats the library offers (Council of the Great City Schools & Urban Libraries Council, 2016). Assessment strategies that evaluate program efficacy and introduce the library to new partnerships consequently facilitate the empowerment of diverse patrons as learners.

Planning Assessments

Assessment strategies require a substantial amount of planning before they can be effectively implemented alongside a program. In fact, one of the barriers to proper assessment in public libraries is the amount of time and staff training that accompanies it. There are two key considerations when planning assessments to ensure their efficacy: the program design and the intended learning outcomes.

The type of assessment depends heavily on the structure and design of the program. This will be discussed in depth later in the chapter, but, suffice it to say, certain factors will affect which assessment strategy is appropriate and effective. The social or cultural characteristics of the audience are especially relevant. For instance, a reading program designed for children and families with limited English skills will require a different assessment strategy than a program developed for white young adults in a rural community (Council of the Great City Schools & Urban Libraries Council, 2016). The structure of the program will also influence assessment strategies. In order to carry out a longitudinal design or an assessment strategy over multiple time scales, the program will need to be sustainable. Assessment is only effective when it fits the program it accompanies.

Clearly defined learning outcomes are necessary when choosing an assessment strategy. Librarians must know what they want to assess before they design an approach, an impossibility without established learning outcomes for the program. Assessment strategies can evaluate a number of learning gains, from content knowledge to developmental outcomes. However, no assessment strategy or program design is perfect; every program produces outcomes that librarians cannot anticipate (Blummer, 2007). Some of these have practical value to the learner, while others have a deep-level impact on social or emotional growth. Furthermore, learning outcomes can easily become a method of control and accountability that restricts patrons’ agency while learning and reinforces the opportunity gap (Gregory & Higgins, 2017). Remaining mindful of this during assessment and using learning outcomes to develop instruction within your library can help you accommodate unexpected outcomes that enrich your program and make the public library unique.

Outcome-Based Evaluation

It is difficult to organize public library assessment strategies into distinct categories due to the literature gap on the subject. However, there is some precedent in both research and practice within more formal learning environments that examines tried-and-true techniques as well as more innovative approaches. When used to evaluate learning gains from library programming, these can be collectively referred to as outcome-based evaluation, with “outcome” defined as any benefit to participants. Programs that are created to benefit patrons have inherent learning goals that can be articulated and assessed. Outcome-based evaluation involves determining what evidence will illustrate these desired changes (Birnbaum, n.d.). Outcome-based evaluations can be either quantitative or qualitative. We will explore each approach in more depth below.
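
One practical way to start is to write the outcome logic down explicitly before any data is collected. The sketch below, a purely illustrative example in Python rather than a prescribed format, pairs each desired change with the evidence that would demonstrate it and the source of that evidence; the outcomes, indicators, and data sources are invented for the fanfiction workshop from this chapter’s opening example.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One desired change, the evidence that would show it, and where to look."""
    statement: str    # the benefit to participants, phrased as a change
    indicator: str    # observable evidence that the change occurred
    data_source: str  # where/how that evidence will be collected

# Hypothetical plan for the fanfiction workshop from the opening example.
workshop_outcomes = [
    Outcome(
        statement="Tweens can structure a short story with a beginning, middle, and end",
        indicator="Drafts from the final session include all three parts",
        data_source="Rubric scoring of final drafts",
    ),
    Outcome(
        statement="Tweens feel more confident sharing their writing",
        indicator="Self-reported confidence rises between the first and last session",
        data_source="Pre/post survey question on a 1-5 scale",
    ),
]

for o in workshop_outcomes:
    print(f"OUTCOME: {o.statement}\n  evidence: {o.indicator}\n  source: {o.data_source}")
```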

Quantitative Assessment Strategies

Quantitative measures produce numerical results or data that can speak to the impact of library instruction on individuals or groups of learners. Librarians are often more comfortable with quantitative assessment strategies because they don’t require direct participation or consent from patrons, making them less time-intensive and generally more familiar (Farrell & Mastel, 2016). Quantitative metrics demand less training to collect and interpret, are typically easy to communicate, and are often preferred by library administrators who are tasked with justifying the cost of library resources, staff, and services. Quantitative data can be collected on several aspects of instruction.

Use of Space and Resources

Measures that speak to the physical use of the library space and resources are often immediately observable for librarians, digitally or physically. Traffic through the library’s spaces is perhaps the best example of this data. Foot traffic can be observed from the desk, the door, or with visible marks throughout the space. Most websites and social media platforms offer statistics on traffic through digital spaces, as well. Inquiries via the library’s digital reference service, if available, and at the reference desk are also useful measures of library use (Urban Libraries Council, 2016). This data is representative of how people move through the library’s spaces and can consequently inform program design and learning outcomes.
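
As a minimal illustration of how such counts might be summarized, the sketch below compares average daily gate counts before and during a hypothetical program period. All dates, counts, and the cutoff date are invented; a real library would substitute figures from its own gate counter, website analytics, or reference-desk log.

```python
from datetime import date
from statistics import mean

# Hypothetical daily gate counts (date -> patrons through the door).
gate_counts = {
    date(2024, 3, 4): 212, date(2024, 3, 5): 198, date(2024, 3, 6): 240,
    date(2024, 3, 11): 265, date(2024, 3, 12): 301, date(2024, 3, 13): 287,
}

program_start = date(2024, 3, 10)  # assumed start of the program being assessed

before = [n for d, n in gate_counts.items() if d < program_start]
during = [n for d, n in gate_counts.items() if d >= program_start]

print(f"Average daily traffic before the program: {mean(before):.0f}")
print(f"Average daily traffic during the program: {mean(during):.0f}")
print(f"Change: {(mean(during) - mean(before)) / mean(before):+.0%}")
```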

Engagement

Quantitative metrics can also provide surface-level measurements of engagement with library instruction. Barbara Blummer (2007) suggested that the amount of time spent on LibGuides or online tutorials and the number of mouse clicks can illustrate how patrons engage with these online services. Attendance at programs or events and the circulation of targeted materials are simple assessments of patron interest in programs as well as further learning. This data is easy to collect during and after the event.
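
A sketch of how such engagement figures might be aggregated appears below. The export format and column names are invented, since each analytics platform exports its own fields; the point is only the shape of the summary, not a specific tool.

```python
import csv
from io import StringIO

# Hypothetical analytics export: one row per tutorial session.
# Column names are invented; a real export would need its own fields mapped in.
EXPORT = """page,seconds_on_page,clicks
citation-tutorial,310,12
citation-tutorial,45,1
database-guide,620,24
database-guide,180,9
"""

sessions = list(csv.DictReader(StringIO(EXPORT)))

# Group sessions by page, then summarize time and clicks per page.
by_page: dict[str, list[dict]] = {}
for row in sessions:
    by_page.setdefault(row["page"], []).append(row)

for page, rows in by_page.items():
    avg_time = sum(int(r["seconds_on_page"]) for r in rows) / len(rows)
    avg_clicks = sum(int(r["clicks"]) for r in rows) / len(rows)
    print(f"{page}: {len(rows)} sessions, {avg_time:.0f}s average, {avg_clicks:.1f} clicks")
```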

Individual Growth

Some librarians are taking an innovative approach to generating numerical data that represents learning gains by adapting school-based approaches to the public library. Karen Diller and Sue Phelps (2008) advocated for electronic portfolios, arguing that they are authentic portrayals of student learning and reflection, so long as they center on learning outcomes. Assessment rubrics have also been introduced to the public library, particularly in summer reading programs, to give librarians a better sense of patrons’ level of learning. These forms of assessment complement longitudinal methods of study, though they are time-intensive.
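
As a rough sketch of how a rubric turns judgments about patron work into comparable numbers, the example below scores a hypothetical summer-reading journal entry on three invented criteria, each rated on a 1-3 scale, before and after a program. The criteria and scores are illustrative only.

```python
# Hypothetical rubric: each criterion is rated 1 (emerging), 2 (developing),
# or 3 (proficient). Criteria are invented for illustration.
RUBRIC_CRITERIA = ["retells the story", "connects it to own life", "uses new vocabulary"]

def score_entry(scores: dict[str, int]) -> float:
    """Average the 1-3 level scores across all rubric criteria."""
    missing = set(RUBRIC_CRITERIA) - set(scores)
    if missing:
        raise ValueError(f"unscored criteria: {missing}")
    return sum(scores.values()) / len(scores)

# One patron's reading-journal entry, scored before and after the program.
before = {"retells the story": 1, "connects it to own life": 1, "uses new vocabulary": 2}
after  = {"retells the story": 3, "connects it to own life": 2, "uses new vocabulary": 2}

print(f"Before: {score_entry(before):.2f} / 3, after: {score_entry(after):.2f} / 3")
```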

Public librarians have also taken cues from gaming culture, introducing loyalty programs and achievement-based programs as forms of assessment. Virtual badges, for example, delineate the progress learners are making toward learning goals and competencies with digital milestones (Urban Libraries Council, 2016). This gamification of learning is intended to leave patrons with a sense of achievement and increased motivation. Librarians can use these milestones to evaluate patrons’ skill levels before and after relevant programming.
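
A minimal sketch of the underlying badge logic follows, assuming the library records completed activities per patron; the badge names, required activities, and thresholds are invented for illustration.

```python
# Hypothetical badge definitions: badge name -> set of activities required.
BADGES = {
    "Getting Started": {"attend_orientation"},
    "Digital Explorer": {"attend_orientation", "finish_tutorial", "build_project"},
}

def earned_badges(completed: set[str]) -> list[str]:
    """Return every badge whose required activities are all complete."""
    return [name for name, reqs in BADGES.items() if reqs <= completed]

def progress(completed: set[str], badge: str) -> str:
    """Show how many of a badge's required activities are done so far."""
    reqs = BADGES[badge]
    return f"{len(reqs & completed)}/{len(reqs)} activities toward {badge!r}"

patron = {"attend_orientation", "finish_tutorial"}
print(earned_badges(patron))                  # ['Getting Started']
print(progress(patron, "Digital Explorer"))   # 2/3 activities toward 'Digital Explorer'
```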

Shared Data

The popular adage, “it takes a village to raise a child” aptly summarizes the importance of shared data, which is imperative when collaborating with schools, city or state organizations, and other public institutions (Urban Libraries Council, 2015). Librarians are in a unique position to facilitate a coordinated approach to educational gains. Their flexibility allows them to design programs to meet patrons’ diverse learning needs and to engage disinterested or discouraged students. However, public libraries cannot identify patrons in need of assistance or support without the cooperation of local schools or city organizations (Council of the Great City Schools & Urban Libraries Council, 2016). Sharing data can help all parties of interest assess the efficacy of programming by correlating participation, engagement, and other metrics from the public library with academic metrics such as reading gains and standardized test scores, as well as county or city demographic data.
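
The sketch below outlines what such a correlation might look like once a partner school shares de-identified reading gains keyed to the same participant IDs the library holds. Every identifier and value here is hypothetical, and any real exchange of student data would be governed by privacy and data-sharing agreements.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical shared records keyed by a common, de-identified participant ID.
library = {  # id -> library programs attended
    "p01": 8, "p02": 2, "p03": 5, "p04": 0, "p05": 7,
}
school = {  # id -> change in reading score shared by the partner school
    "p01": 12, "p02": 3, "p03": 9, "p04": -1, "p05": 10,
}

# Join on the shared ID, keeping only participants present in both datasets.
ids = sorted(library.keys() & school.keys())
attendance = [library[i] for i in ids]
reading_gain = [school[i] for i in ids]

# A correlation suggests (but does not prove) a relationship worth investigating.
print(f"Correlation of attendance with reading gains: "
      f"{correlation(attendance, reading_gain):.2f}")
```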

Qualitative Assessment Strategies

Qualitative measures generate non-numerical information that, in this context, expresses learners’ experience of the library and its services. Because this data tends to be narrative or anecdotal, it may require more time and staff training; even so, qualitative assessment strategies can capture learning outcomes that are missed by quantitative measures. Shannon Farrell and Kristen Mastel (2016) suggested that many librarians are unfamiliar with these methods and are consequently uncomfortable implementing them in a public library setting. However, the push toward assessment and formalized learning goals has inspired many librarians to bring qualitative measures to the public forum.

Internal Assessment

The foremost measure of effective learning goals should be the library’s mission statement. Programs in both academic and public libraries are most effective when they support the organizational goals of the library (Farrell & Mastel, 2016). Coworkers can provide helpful feedback about the alignment of learning outcomes with the purpose of the program and the library’s mission statement.

Feedback can also be solicited during regularly scheduled staff meetings, which provide a convenient forum for group assessments. Staff members can discuss emerging best practices, evaluate recent programs, and determine how assessment results will influence future programming or events (Urban Libraries Council, 2016). This offers a sounding board with consistent and diverse feedback that enables individuals, and the team, to modify their approach to education and assessment in the library.

Reflective practice is a means of internal assessment that is, unfortunately, frequently overlooked in practice and literature (Farrell & Mastel, 2016). Asking what went well or what didn’t work, in your personal experience of the program, can help immediately identify points of improvement and guide future modifications to the program or assessment design. Self-reflection is most beneficial when used in conjunction with other assessment strategies.

Surveys

Surveys are a traditional form of qualitative assessment (they can also provide quantitative data, depending on the questions included). They give learners the opportunity to report on the program and their learning gains from a personal point of view (Urban Libraries Council, 2016). Learner feedback can suggest how the program or its assessment should be modified, or what worked well enough to be carried forward. Surveys can be administered face to face during or after a program, in focus groups, with a postcard or written form after a program, or through an online submission point (Blummer, 2007). These methods have been successfully implemented in libraries and other public institutions for decades.
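
As a simple sketch of how mixed survey data might be tallied, the example below summarizes a hypothetical 1-5 rating question numerically while collecting free-text comments verbatim for staff to read. The question wording and responses are invented.

```python
from collections import Counter

# Hypothetical responses to a post-program survey.
responses = [
    {"learned_something_new": 5, "comment": "I finally understand how to cite sources."},
    {"learned_something_new": 4, "comment": "More hands-on time, please."},
    {"learned_something_new": 5, "comment": ""},
]

# Quantitative: distribution and mean of the 1-5 agreement scale.
ratings = [r["learned_something_new"] for r in responses]
print("Rating distribution:", dict(Counter(ratings)))
print(f"Mean rating: {sum(ratings) / len(ratings):.1f} / 5")

# Qualitative: free-text comments are kept verbatim for staff to read and code.
for comment in (r["comment"] for r in responses if r["comment"]):
    print("-", comment)
```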

There are more innovative approaches to the traditional survey that may improve patron participation. The survey can be brought into the digital age with more than an online form (Farrell & Mastel, 2016). Comments on social media platforms in response to a prompt from the library could be considered a form of survey data. Short recordings, such as “stories” or “vox pops,” can also capture candid, memorable feedback. Since these methods are less anonymous, it’s important that the patron is informed and consents to participate and to have their thoughts shared. Surveys give participants the chance to directly express their thoughts and feelings in a functional format.

Observations

Observations are straightforward and largely self-explanatory, but they do require awareness before, during, and after the program. Documenting how patrons engage with the program through written notes or photographs is a practical examination of the program’s success (Farrell & Mastel, 2016; Urban Libraries Council, 2016). It is also convenient, since it only requires that the librarian be present and equally engaged.

To get the fullest possible understanding of the impacts of your instruction on learners, you will likely want to utilize some combination of quantitative and qualitative assessment. For an example of a public library that effectively combined multiple forms of assessment, read about the Plano (Texas) Public Library below.

 

Spotlight: Project Outcome at the Plano Public Library

The Plano Public Library in Plano, TX, serves a large and diverse community. Because their mission statement emphasizes outreach to children, teenagers, and their families, Plano Public Library designed three new programs to meet the perceived needs of these patrons: a storytime and an art program for children, as well as a technology program for teenagers (ORS Impact, 2017). The library felt that the programs were going well but wanted a more concrete assessment of patrons’ reception of these services. None of the staff members had training or experience with assessment, so the Plano Public Library reached out to Project Outcome for support.

Project Outcome is a free toolkit that provides resources, training, and assessment tools designed to help public libraries answer the question, “What good did we do?” It was created by the Performance Measurement Task Force of the Public Library Association as a three-year project attempting to standardize best practices for assessment in the public library (Public Library Association, 2017). Its tools measure patron knowledge, confidence, application, and awareness in seven key library service areas. The toolkit and further information can be found at www.projectoutcome.org.

In addition to their statistics on circulation and attendance, the Plano Public Library conducted two surveys that solicited thoughts and comments from parents. They discovered that while parents and their young children enjoyed the programs, teenagers felt unsatisfied with their options. Additionally, there were patrons with unique needs who couldn’t fully participate. The library staff modified the programs to include a sensory storytime designed for children who become overstimulated in large groups, a Storytime Around the World featuring children’s books in different languages, a collection of new technology kits for teenagers, and an arts program open to children of all ages.

The response from the community astonished library staff. When they performed the same assessments a month later, the results revealed a community that felt heard. Circulation of foreign language materials dramatically increased, as did attendance at all programs, but the most informative feedback came from the patrons’ testimonials. Parents and their children were comforted by the diverse services and materials, reporting that they felt included and accepted in the public space. Parents appreciated how the technology kits fostered collaboration and positive interactions with their teenagers. The art program helped teenagers and children create projects they didn’t have the means to create at home or at school. The assessment strategies equipped the Plano Public Library with the insight they needed to respond to their community’s needs and created a diverse space that opened opportunities for its learners.

 

Barriers to Assessment

Despite advocacy on behalf of assessment in public libraries, there are still challenges that create barriers to entry for librarians interested in the practice. Assessment strategies require staff training, to some extent, and a substantial time investment to be effective. Many strategies can be implemented immediately before, during, or after a program. These strategies are often preferred and commonly used by librarians because they provide immediate feedback on what learning may have occurred (Farrell & Mastel, 2016). However, change is not always immediately observable. Patron learning is most accurately measured over time, using multiple time scales or longitudinal designs (Lemke, Lecusay, Cole, & Michalchik, 2015). This is especially true for public libraries and other social institutions, whose work cannot always be benchmarked against formal learning institutions for comparison (Institute of Museum and Library Services, 2000). Long-term benefits and positive changes to learners in the community must, therefore, be examined periodically over time. Many librarians can’t design and implement assessment strategies at that scale because they simply don’t have the time or training.

Some librarians are daunted by the feat of designing an assessment strategy. It is a genuinely complex task that requires an understanding of the library’s mission, the program’s learning outcomes, and what the librarian needs to know. Quantitative data offer a shallow representation of patron learning and engagement that, while easily understood, can be difficult to interpret in terms of deep impact (Farrell & Mastel, 2016). Such data alone are not always helpful to libraries, whose mission and informal learning opportunities seek to inspire, motivate, and change the individual in addition to instilling content knowledge (American Library Association, 2014). Libraries must find a way to measure the personal gains that demonstrate our very purpose as a public institution, but it can be incredibly challenging to represent this change in a tangible way (Institute of Museum and Library Services, 2000). Designing assessment strategies that are well balanced and capable of capturing unobservable changes is an intimidating task that requires training or self-education.

Assessment can also go awry after a library takes the initiative to implement the practice if the results are misused. As Rebecca Morris (2015) explained, there is a distinction between “assessment OF learning” and “assessment FOR learning” (p. 106). Librarians as educators and patrons as learners must be equal partners in assessment. Unfortunately, librarians who do implement assessment strategies often use them for the sole purpose of soliciting support or funding rather than improving program design or content. Grants and other financial supporters do require evidence of community impact and learning to hold organizations accountable for the proper use of funding (Institute of Museum and Library Services, 2000). However, engaging in assessment as a regular practice is ineffective if librarians don’t also use it for the development and growth of their public services.

The current lack of assessment in public libraries is also a barrier in and of itself. Assessment is not inherently considered relevant to programming and patron learning. In fact, “assessment is often viewed as an ‘add on’ research effort or special project rather than as something integral to the operation of the library and management of its programs” (Kirk, 1999, p. 1). Because it is not an integrated practice, bringing assessment strategies into the public library requires a cultural shift among the staff and its management (Urban Libraries Council, 2016). Changing the culture of any workplace requires dedication and consistency, particularly when staff members are entrenched in the current way of doing things and the field offers little precedent to draw on.

Libraries are incredibly diverse. They serve a variety of communities, cultures, and patrons, sometimes with very different missions. Programming predominantly focuses on new knowledge, skills, or personal gains, yet varies in format and content. The extreme variety among libraries makes a standard approach to assessment exceedingly difficult to define (American Library Association, 2014). The studies that do examine or advocate standards for best practice are scattered and inconsistent, with varying definitions of the same terms and conflicting purposes (Becker, 2015). There are professionals who firmly believe in the importance of assessment strategies in the public library and pursue this objective through forums or councils in the hopes of designing standards that support all libraries in that enterprise, yet there are currently no widely accepted standards or tools in place.

Conclusion

The concept of assessing learning is fairly new in public libraries, but despite the distinct literature gap on the subject, researchers and practitioners have started a conversation about the relevance of assessment strategies to effective public service. Assessment demonstrates the efficacy of programming for patron learning, its alignment with organizational values or mission, and how these things can be improved (Birnbaum, n.d.). Librarians can readily find precedents for successful assessment strategies in pioneering public libraries and academic settings (Becker, 2015). There are certainly drawbacks to each assessment strategy. Quantitative measures alone represent outputs (numerical data that only illustrate the physical results of a service, like attendance or circulation) rather than outcomes (the benefits patrons experience as a result of their participation) (Institute of Museum and Library Services, 2000). On the other hand, some librarians are concerned that qualitative strategies may violate patron privacy, which is a core ethical principle of librarianship (American Library Association, 2014). However, the disparate nature of assessment strategies is precisely why they are effective when integrated into a single approach. This flexibility allows libraries to work from their strengths and create an accessible entry point to implementing assessment. This is the time to collaboratively work toward a standard for best practice and to demonstrate our necessity not only as providers of public services but also as agents of change.

 



References

American Library Association. (2014). National impact of library public programs assessment. Retrieved from https://nilppa.org/

American Library Association. (2017). Programming & exhibitions. Retrieved from http://www.ala.org/tools/programming

Becker, S. (2015). Outcomes, impacts, and indicators. Library Journal. Retrieved from http://www.libraryland.com/article-outcomes-impacts-indicators/

Birnbaum, M. (n.d.). Outcome based evaluation basics. Institute of Museum and Library Services. Retrieved from https://www.imls.gov/grants/outcome-based-evaluation/basics

Blummer, B. (2007). Assessing patron learning from an online library tutorial. Community & Junior College Libraries, 14(2), 121-138.

Council of the Great City Schools, Institute of Museum and Library Services, & Urban Libraries Council. (2016). Public partners for early literacy: Library-school partnerships closing opportunity gaps. Retrieved from https://www.urbanlibraries.org/assets/ULC_Opportunity_Gap_Report.pdf

Council of the Great City Schools, & Urban Libraries Council. (2016). Closing the opportunity gap for early readers. Retrieved from https://www.urbanlibraries.org/assets/Field_Scan_Report_-_ULC_Natl_Forum_on_Closing_the_Opportunity_Gap_for_Early_Readers.pdf

Diller, K. R., & Phelps, S. F. (2008). Learning outcomes, portfolios, and rubrics, oh my! Authentic assessment of an information literacy program. Libraries and the Academy, 8(1), 75-89.

Farrell, S. L., & Mastel, K. (2016). Considering outreach assessment: Strategies, sample scenarios, and a call to action. In the Library with the Lead Pipe. Retrieved from http://www.inthelibrarywiththeleadpipe.org/2016/considering-outreach-assessment-strategies-sample-scenarios-and-a-call-to-action/

Gregory, L., & Higgins, S. (2017). Reorienting an information literacy program toward social justice: Mapping the core values of librarianship to the ACRL framework. Communications in Information Literacy, 11(1), 42-54.

Institute of Museum and Library Services. (2000). Perspectives on outcome based evaluation for libraries and museums. Washington, DC: Institute of Museum and Library Services. Retrieved from https://www.imls.gov/sites/default/files/publications/documents/perspectivesobe_0.pdf

Kirk, T. G. (1999). Library program assessment. Paper presented at the ACRL Ninth National Conference. Retrieved from http://www.ala.org/acrl/sites/ala.org.acrl/files/content/conferences/pdf/kirk99.pdf

Lemke, J., Lecusay, R., Cole, M., & Michalchik, V. (2015). Documenting and assessing learning in informal and media-rich environments. Cambridge, MA: Massachusetts Institute of Technology.

Metropolitan Group, & Urban Libraries Council. (2015). Measuring summer learning in libraries. Retrieved from https://www.urbanlibraries.org/assets/ULC_Literature_Review_-_Natl_Forum_on_Effective_Summer_Learning_in_Libraries.pdf

Morris, R. J. (2015). Assessment for learning: Partnering with teachers and students. In School libraries and student learning: A guide for school leaders (pp. 103-123). Cambridge, MA: Harvard Education Press.

ORS Impact. (2017). Using Project Outcome with storytime and teen programs to improve programming and better meet community needs. Retrieved from http://www.ala.org/pla/initiatives/performancemeasurement/planocasestudy

Public Library Association. (2017). Performance measurement: Introduction to Project Outcome. Retrieved from http://www.ala.org/pla/initiatives/performancemeasurement

Urban Libraries Council. (2015). Leadership brief: Partners for education. Retrieved from https://www.urbanlibraries.org/assets/Partners_for_Education.pdf

Urban Libraries Council. (2016). Public libraries and effective summer learning: Opportunities for assessment. Retrieved from https://www.urbanlibraries.org/assets/Public_Libraries_and_Effective_Summer_Learning_web.pdf