Evaluating Humanitarian Action Guide

Although evaluation of humanitarian action (EHA) faces many of the same challenges as other sectors (e.g., international development), a number of these are intensified by the volatile contexts in which humanitarians operate and the nature of the work undertaken. For instance, insecurity may limit access to programmes and affected populations, and the work may be particularly time-sensitive. How can we carry out sufficiently rigorous and credible evaluations in such contexts? Answering this question is at the heart of this guide from the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP), which supports evaluation specialists and non-specialists at every stage of an evaluation, from the initial decision to final dissemination. It paints the whole picture of evaluation in the sector, consolidating current knowledge about initiating, managing, and completing an EHA. As such, it offers a common reference point for humanitarian evaluators.
Noting the importance of evaluations in answering the difficult question "how are we really doing?" and in helping decision-makers make necessary course corrections or tough choices, ALNAP explains the specific rationale for this guide. There is increasing interest and investment in evaluations, as concerns are raised about the accountability and effectiveness of international development and humanitarian action. Evaluations remain a common tool for donors to assess accountability, but they can also be used by organisations to learn and improve their programmes. There is now a critical mass of collective knowledge to build on: ALNAP's evaluation database alone contains over 2,000 evaluations covering the last 30 years. There is also a need for a common language and understanding of EHA to facilitate discussions within teams, within organisations, and across organisations. Furthermore, although evaluations have become more common practice, relatively few are clear about their methodology; where they are clear, humanitarian agencies and evaluators often restrict themselves to a small number of designs and methods. In addition, as agencies decentralise, the commissioning of evaluations has shifted from agency head offices to field-based staff. Yet field-based managers often have little experience in planning and managing evaluations, especially EHA.
The EHA Guide, with a total of 18 chapters, is organised to reflect a typical evaluation process. It leads the user through the stages that the commissioning agency, evaluation manager, and evaluator would undertake, from the decision to conduct an evaluation all the way through to the dissemination of results. The core of the guide comprises five chapters that reflect the key stages in the evaluation process. Each of these stages needs to be adapted to suit the context of the evaluation (considering factors such as the crisis, response, country, region, project or programme, and team) as well as the organisational context (for instance, "buy-in" from leadership and learning culture). These chapters are then broken down into sections that walk evaluators through the specific activities in each stage.
For example, one section focuses on engaging with the affected population in an evaluation. It covers topics such as planning and designing the evaluation to engage with the affected population, methods for doing so, particular challenges in engaging with the affected population in EHA, and the ethical issues involved. As is the case throughout the Guide, text boxes explore "good practice" examples, such as one from Haiti. Nine months after the earthquake there, CARE International and Save the Children Fund commissioned a joint independent evaluation of their humanitarian response. While the evaluation used the Organisation for Economic Co-operation and Development (OECD) Development Assistance Committee (DAC) evaluation criteria (relevance, effectiveness, efficiency, impact, and sustainability) and cross-cutting themes to assess the aid efforts to date, it also provided a snapshot of how different groups representative of Haitian society perceived and experienced the global humanitarian response. This was done using the People First Impact Method: national staff from CARE and Save the Children were trained over two days to build their communication, listening, and facilitation skills, and then conducted focus group discussions using thematic open-ended questions. This meant that the discussion could be non-prescriptive, extending beyond the work and projects of either agency, in order to gain knowledge about people's real-life experience and thus help answer the two agencies' questions: "Are we doing the right things?" and "Are we doing things right?"
Quality evaluation processes are not linear, and the EHA Guide emphasises the importance of maintaining a utilisation focus. Evaluation results should feed into future humanitarian work by informing decision-making, facilitating learning, identifying areas for improvement, and pointing to further research or evaluation questions. To reflect this, the evaluation stages are represented as a loose loop rather than a timeline. Furthermore, evaluation processes have a number of feedback loops, and there may be some back and forth between activities; within the guide, these links between sections are highlighted with cross-references. The bibliography and index are available on the ALNAP website.
The process of developing the EHA Guide began in 2009 with a Humanitarian Practice Network (HPN) survey. A pilot version was first released in June 2013, following a three-year drafting process. Fifteen thousand downloads later, ALNAP had gathered feedback from more than 40 organisations participating in the pilot process, which tested its content on the ground. This feedback was incorporated into the final EHA Guide.
- A launch event and panel discussion on the key issues in evaluation of humanitarian action is available to watch on the ALNAP website.
- You can connect with other evaluators in the ALNAP Network and ask questions about any EHA issues by joining the Humanitarian Evaluation Community of Practice.
- The video below features evaluators discussing what has worked for them and what hasn't when evaluating humanitarian action. To view that and other guide-related videos, visit the ALNAP website.
ALNAP website, October 21 2016. Image credit: World Food Programme (WFP)











































