WHO training package on environmental cleaning: Evaluation methodology guide
1. Introduction
1.1 Purpose and audiences
The purpose of this guide is to inform robust evaluations of the WHO training package – a package aimed at personnel whose primary role in health-care facilities is environmental cleaning, hereafter referred to as cleaners.
The WHO training package – Environmental cleaning and infection prevention and control in health-care facilities in low- and middle-income countries – was designed to improve the competencies of cleaners through a practical, educational approach for adult learners in low- and middle-income countries. It comprises two volumes: a trainer’s guide, and modules and resources (1,2). An associated OpenWHO online course describes the essential preparations for trainers to deliver the WHO training package.
The evidence base on environmental cleaning needs to be strengthened overall in terms of determinants, interventions and consequences and as a core component of infection prevention and control (IPC) in relation to the built environment (3). For example, several recent systematic reviews assessing interventions to improve environmental cleanliness have identified only small-scale pilot studies in resource-limited settings (4–9). Evaluations of interventions to improve environmental cleaning in health-care facilities in low- and middle-income countries, such as the WHO training package, provide an important opportunity not only to learn lessons of local relevance but also to help strengthen the broader evidence base, provided that attention is paid to the important methodological considerations that ultimately determine the reliability of findings.
The purpose of this guide is ultimately to strengthen evaluations of the WHO training package, so that robust and useful lessons can be learned for future implementation, adaptations and assessments of the package. The guide seeks specifically:
• to highlight key methodological considerations for process evaluation, which accounts for how the intervention was delivered and how this contributes to the measured effects, and/or impact evaluation (sometimes called outcome evaluation), which assesses the effects of the intervention in terms of final endpoints; and
• to promote the use of mixed methods (qualitative, quantitative and microbiological) in evaluation to capture the multiple facets of the WHO training package as a complex intervention.
The intended audience for this guide is primarily those tasked with designing and delivering a process and/or impact evaluation, whether in the context of a research study or a type of programme audit, as discussed further later (see Table 2). It is therefore assumed that most readers will be familiar with the usual stages of evaluation. Other stakeholders, such as those who commission the evaluation, may consult just the introductory Sections 1 and 2 of the guide but are less likely to focus on the technical aspects in Section 3. The diverse needs of stakeholders beyond those conducting the evaluation should be recognized in what should be an iterative co-design activity, thus helping to manage varying expectations about the goals and outputs of the evaluation (10).
» For implementers of the training package, an evaluation is a means of learning whether the desired change in the main outcomes has been achieved, whether this represents a worthwhile improvement, whether the training caused any unintended negative effects and how the delivery of the training worked in practice and can be sustained.
» For funders or budget holders, an evaluation can provide feedback on whether providing the financial and human resources needed for the training was justified and can help in making decisions to fund future work.
» For those responsible for broader IPC and activities to improve the quality of care, the evaluation can be a resource for learning and sharing knowledge about barriers and facilitators of change in a specific context.