The Benchmarking Book
With growing demands for increased operational efficiency and process improvement in organizations of all sizes, more and more companies are turning to benchmarking as a means of setting goals and measuring performance against the products, services and practices of other organizations that are recognized as leaders.
The Benchmarking Book is an indispensable guide to process improvement through benchmarking, providing managers, practitioners and consultants with all the information needed to carry out effective benchmarking studies.
Benchmarking Handbook
B. Andersen, P.-G. Pettersen. Springer Science & Business Media, Dec 31, 1995. Business & Economics. 192 pages.
Benchmarking is a powerful tool for improvement. It is one of the fastest-growing techniques for quality and performance improvement and attracts massive attention. Now, more than ever, there is a clear need for straightforward guidelines to help companies make the most of benchmarking. This book addresses that need.
The Benchmark Books for Levels aa-J are one of three parts in a process that assesses reading behavior and comprehension. Using the books with their running records, Retelling Rubrics, and Comprehension Quick Check Quizzes provides an accurate measure of students' reading abilities.
We've imported the test crate, which contains our benchmarking support. We have a new function as well, with the bench attribute. Unlike regular tests, which take no arguments, benchmark tests take a &mut Bencher. This Bencher provides an iter method, which takes a closure. This closure contains the code we'd like to benchmark.
There's another tricky part to writing benchmarks: benchmarks compiled with optimizations activated can be dramatically changed by the optimizer so that the benchmark is no longer benchmarking what one expects. For example, the compiler might recognize that some calculation has no external effects and remove it entirely.
The benchmarking runner offers two ways to avoid this. First, the closure that the iter method receives can return an arbitrary value, which forces the optimizer to consider the result used and ensures it cannot remove the computation entirely. This could be done for the example above by adjusting the b.iter call to:
To further ensure proper leveling, the books were vetted by a team of experienced classroom teachers, and Heinemann conducted a formal field study of the leveling that involved a broad spectrum of students across the U.S.
Covering everything from essential theory to important considerations such as project management and legal issues, The Benchmarking Book is the ideal step-by-step guide to assessing and improving your company's processes and performance through benchmarking.
The benchmarking pilot project had been chosen collaboratively by our managers after review and discussion of the results of several user surveys conducted in the spring of 1998. The library had just completed its triennial comprehensive student survey of library services, as well as two SERVQUAL surveys, one in the main library and one in the fine arts library. Both the service ratings and the comments from respondents pointed to our reshelving process as an area needing improvement.
Team members were chosen by Management Information Services staff, partly by considering staff members who had similar experience on a previous process-improvement team. We also looked at having representation from several departments and service units. Two team members were from Management Information Services, to provide statistical skills and continuity for future benchmarking projects. Additional team members were drawn from the Cataloging Department, the Science/Engineering Library, Social Sciences Services, and the stacks staff of the main and music libraries, for a total of seven team members. It is crucial that the team include staff who work in the area to be studied.
"The project this Team is charged to undertake is benchmarking the shelving/reshelving process in all University Library service units. The project should include these processes: map and measure the current process in each library
Benchmarking is a process improvement tool that has been used by the business community for over a decade, but it has only recently migrated to the non-business academic community. This made it difficult to find both information about the process as it relates to libraries and information about other benchmarking projects on the shelving process specifically.
As we had only a semester to complete the project, we undertook parts 3 and 4 of the process simultaneously. We needed to learn more about our own shelving process in each library in the system. Since there were minimal data available on our shelving process, we began to flowchart the process in the eleven libraries at UVa and in the Government Documents department. We also began to work on a survey instrument that would help us gather data about the process as practiced at the University of Virginia Library. We tested this questionnaire by interviewing a few stacks supervisors. The outcome of the test was messy at best, and it was necessary to revise the questionnaire several times in order to garner more usable answers (see Appendix 2). We learned how each location shelved its books and journals, as well as some of the factors that contributed to the shelving process, such as training, number and level of employees, pay rates, LEO (on campus) delivery, new book routines, pick-up routines, and sorting areas. We discovered that many of our libraries already shelved excellently, with materials back on the shelf by the end of each day.
Another component of the project was to identify those institutions that exhibited best practices for shelving. The literature on the shelving process was as sparse as the literature on benchmarking in libraries. Unlike the business world, there was no place to go to determine which library had best practices in shelving; there were no Malcolm Baldrige Award winners among libraries. It should be noted that we also considered comparing our process to similar processes in the business world, such as stocking grocery shelves or shoe stores, or refiling videos at a video rental store. We learned quickly that businesses are not as willing to share information about how they operate as libraries are. On the contrary, the library culture assumes the sharing of information.
Communicating the process to staff and stakeholders is a key element of benchmarking. Our contact at Penn State assisted us in finding a benchmarking consultant, Gloriana St. Clair, director of the libraries at Carnegie Mellon University, who could help us with this. She graciously consented, on very short notice, to present basic benchmarking information to the entire library staff. She also assisted the team in revising the local-practices questionnaire and in deciding which of the institutions exhibiting best practices to visit. She suggested that the team was moving toward its objective at a good pace in spite of our reservations about our lack of training in the benchmarking process, and that we needed to "just get on with it."
We began planning for site visits to the University of Arizona in Tucson and to Virginia Polytechnic Institute and State University in Blacksburg. The site visits were essential for understanding how the best practices really worked. There is no substitute for walking through a process and having an opportunity to ask questions along the way. In addition, the host libraries were asked to fill out the same survey that had been completed by our own stacks staff. This allowed us to identify which procedures were alike and which were different, and thus to see how our process could be improved. We were also fortunate to be very graciously received by the staff of both institutions. Often institutions that have been identified as having best practices are inundated with requests for benchmarking data and site visits. Over time, these institutions become less cooperative, since the benchmarking process requires work on their part as well. We felt privileged that our partners were so willing to help us.
While two team members and our AUL for User Services conducted the out-of-state site visit, the other five team members measured several things for which we had no data: how much we shelved (number of books and journals), what our turnaround time was (from return desk to shelf), how accurately we shelved, and what the turnaround time was for pick-ups. Our MIS programmer developed the protocol for these studies and ran the supporting reports against our Sirsi database. For example, for one measurement we produced a list of call numbers of books checked in on a particular day for each of our larger libraries. These were libraries where we were not sure how long it took to shelve the books: the main library, science/engineering, the undergraduate library, fine arts, and government documents. The lists were checked each day until all (or nearly all) the books were found in the right place. Notations were made about when each book was found and whether it was in the correct place. We found that turnaround time ranged from 1.3 to over 5 days. Some items were not found shelved during the study. For this study, team members carried out the measurement in libraries other than their home libraries in order to avoid influencing the results.