Overview
1.1 Purpose
UNHCR applies the following UN definition of evaluation:
“An evaluation is an assessment, conducted as systematically and impartially as possible, of an activity, project, programme, strategy, policy, topic, theme, sector, operational area or institutional performance. It analyses the level of achievement of both expected and unexpected results by examining the results chain, processes, contextual factors and causality using appropriate criteria such as relevance, effectiveness, efficiency, impact and sustainability. An evaluation should provide credible, useful evidence-based information that enables the timely incorporation of its findings, recommendations and lessons into the decision-making processes of organizations and stakeholders”.
Evaluation informs choices made at all levels of the organization in strategic planning, programming and decision-making, based on timely, credible and impartial evidence. This evidence will reflect, directly and indirectly, the views and perspectives of persons of concern to UNHCR and host communities regarding the protection and assistance provided by the organization.
Emergency evaluations analyze the extent to which UNHCR (as a whole) is providing a relevant, timely and effective response to a country or regional humanitarian crisis, taking into consideration the complex enabling and constraining factors in the country/region.
The objectives of emergency response evaluations can be broadly categorized as follows:
- Strengthening the design of UNHCR’s operations as well as its emergency preparedness, by assessing the extent to which UNHCR’s strategy in the country was relevant to the most important needs of the refugee and IDP population, corresponded to the agency’s areas of strength and responsibility, and took into consideration the capacities and operations of other partners;
- Improving UNHCR’s performance and the results achieved for persons of concern (PoC), through an analysis of the interventions, partnerships, immediate results and potential longer-term impacts of UNHCR’s activities;
- Helping UNHCR further strengthen its policies, guidance, systems and humanitarian-development nexus approaches to respond better to emergencies, by drawing lessons from the whole-of-UNHCR response;
- Providing UNHCR with a credible source of objective information on the organization’s performance that can be used to give assurance and accountability to partners.
1.2 Guiding Principles and Criteria
Evaluation in UNHCR is founded on the fundamental principles of impartiality, credibility and utility. These principles, which are connected and mutually reinforcing, subsume a number of specific norms that guide UNHCR’s work in commissioning, conducting and supporting the use of evaluation.
Evaluations, including emergency response evaluations, are guided by gender, equity and human rights considerations, in accordance with UN-wide norms, standards and guidance. UNHCR draws on the criteria of the OECD DAC and ALNAP, adapting these for evaluating humanitarian action. UNHCR does not select or apply the criteria mechanistically but adaptively to ensure that the evaluation reflects good practice standards and the needs of evaluation users.
Evaluations must be conducted in full respect of UNHCR’s age, gender and diversity policy and UNHCR’s commitment to accountability to affected populations, to ensure that all groups and identities within the populations of concern have equitable opportunities to be involved, and to contribute to the evaluation, irrespective of age, gender, ethnic, political or religious affiliation, or sexual identity.
Relevance for emergency operations
2.1. Management of an L3 Emergency Evaluation
Level 3 emergency evaluations are commissioned and managed by the Evaluation Office. In line with UNHCR’s Evaluation Policy, evaluations shall be conducted no later than 15 months from the L3 declaration. All L3 emergency response evaluations are conducted by independent external evaluators and funded through L3 emergency funding appeals.
Depending on the evaluation objectives and the operational context, the evaluation can start earlier than 15 months after the declaration so that it can contribute to real-time reflection and help assess the implementation of the Real-Time Review’s recommendations.
Main steps:
After the L3 declaration, the Head of the Evaluation Office appoints an Evaluation Manager.
Upon finalization of the Real-Time Review, the Evaluation Manager will initiate discussions with the relevant divisions in HQ, the Regional Bureau and the Country Office to agree on evaluation objectives, scope and timeline.
The Evaluation Office will contract an independent evaluation firm (the service provider) to carry out the evaluation.
An evaluation concept note (2 pages) and the evaluation Terms of Reference (ToR) will subsequently be drafted and agreed upon.
The Evaluation Manager will be responsible for managing the day-to-day aspects of the evaluation process. The UNHCR Country Operation/Regional Bureaux will appoint an evaluation Focal Point.
Management at global, regional and country levels supports the evaluation throughout the process and is requested to:
- Inform all relevant parties in advance (UNHCR colleagues, UN and NGO partners, government, donors), introduce the evaluation team, and explain that staff and partners may be approached for interview.
- Designate a focal point for substantive matters, as well as a focal point for logistical and administrative support.
- Make available to the evaluation team documents that capture key developments, decisions and results relating to the emergency operation. Assist the evaluation team to set up interviews with key stakeholders.
- Interviews should include persons of concern and should adopt an age, gender and diversity (AGD) approach. Representative(s) and country office(s) may also be requested to provide logistical support (subject to operational constraints). To safeguard the credibility and objectivity of the evaluation, UNHCR staff may not participate in any of the evaluation team interviews.
Throughout the process, the UNHCR Evaluation Office will set up and continuously update a library of background documents and data to share with the evaluation team.
2.2 Scope of the Emergency Evaluation
The operational scope of emergency response evaluations is not fixed, and the Evaluation Office will discuss it in collaboration with the relevant Country Operation(s), Regional Bureau(x) and Headquarters. However, it is important to remember that the operational scope of the evaluations must include all levels of the Organization, since an L3 emergency triggers a ‘whole-of-Organization’ response.
The geographical scope should mirror the operational scope. It may focus only on the areas of emergency response, or on the entire country/region. A decision on whether to include regular programmes (i.e. non-emergency-related) in the scope of the evaluation can be made on a case-by-case basis and will be defined in the ToR.
All of the above is also applicable to evaluations of L1 and L2 emergency responses, although the time and operational scope will be more limited. Please refer to the UNHCR Policy on Emergency Preparedness and Response for the key features of the L1 and L2 responses.
In some cases, evaluations of UNHCR emergency responses may be complemented by inter-agency humanitarian evaluations, which focus on system-wide performance, operational barriers and achievements.
Main guidance
3.1. Terms of Reference / Key Areas of Inquiry
The OECD DAC evaluation criteria are the pre-eminent criteria for evaluating development and humanitarian assistance. As updated in 2019, the six criteria are: effectiveness, relevance, efficiency, impact, sustainability and coherence. ALNAP’s 2006 guide interprets the criteria for application in humanitarian action as: effectiveness, appropriateness/relevance, efficiency, impact, coverage, coherence and connectedness.
Additional criteria may include cross-cutting themes, such as coordination, accountability to affected populations, age, gender and diversity. The weight of each criterion will be discussed with the UNHCR Evaluation Manager during the inception stage.
Given time and operational constraints, the evaluation should focus on priority questions that can help to provide evidence of UNHCR’s scaled-up response and humanitarian assistance. Indicative evaluation questions may include:
- How responsive was the UNHCR strategy to the external environment, including the evolving context and role of Government and other humanitarian and development actors (UN, NGO, civil society/local actors)?
- To what extent has UNHCR followed up on the Real-Time Review (RTR) recommendations as the crisis has evolved?
- Given the availability of resources, to what extent is UNHCR’s response meeting the needs of the most vulnerable? How effective has its targeting strategy been?
- To what extent has UNHCR achieved intended outputs and targets and contributed to envisaged outcomes? Are there any indications of unintended or negative effects of the L3 response?
- How efficient and timely was the response, e.g. the extent of scale-up across business processes, HR, procurement, funding, technical support and other areas? Were internal accountabilities clear?
- To what extent are successes and failures in the response attributable to system-related factors? What role did key UNHCR entities play in this (divisions, bureaux, HQ, operation)?
Areas of recommendation typically consider:
- What more should be done? What should be done differently to enhance UNHCR’s response programming for refugees, IDPs and their communities?
- What can UNHCR learn from its preparedness efforts? And what lessons can be used at the regional level and in other regions of UNHCR operations?
- How can UNHCR adapt its emergency response to longer-term recovery and reintegration needs?
Because each evaluation is specific to its context, evaluation questions are discussed and formulated at the stage of ToR development. They are presented here for indicative purposes only.
UNHCR encourages the use of a standard Evaluation Matrix. This forms the main analytical framework for an evaluation and sets out how each evaluation question and evaluation criterion will be addressed. It breaks down the main questions into sub-questions and maps against them the data collection and analysis methods, indicators and/or lines of inquiry, data collection tools and sources of information. The Evaluation Matrix serves to indicate where secondary data will be used and where primary data will need to be collected. Furthermore, it guides the analysis and ensures that all data collected is analyzed and triangulated, thus helping to increase credibility and reduce subjectivity in the evaluative judgement.
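For illustration only, a single entry in such a matrix might be structured as follows (the question, indicators, methods and sources shown here are hypothetical examples, not prescribed content):
- Evaluation question: How responsive was the UNHCR strategy to the evolving context?
- Sub-question: Did the country strategy adjust as needs and the presence of other actors changed?
- Indicators/lines of inquiry: Evidence of strategy revisions; alignment of revisions with updated needs assessments.
- Data collection methods and tools: Document review; semi-structured key informant interviews.
- Sources of information: Country strategy documents; UNHCR, partner and government staff.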
3.2. Methodology
For each emergency evaluation, UNHCR’s expectations will be presented in the ToR, and the most appropriate methodology will be discussed on a case-by-case basis.
Nevertheless, evaluations should always strive to employ a mixed-methods approach that relies on a variety of primary and secondary sources. Given the nature of emergency evaluations, data collection and analysis methods that are time- and labor-intensive (e.g. economic modeling, large-scale surveys, systematic reviews) may be less appropriate. The approach should always be participatory, but the number of Key Informant Interviews and Focus Group Discussions may be reduced or, where possible, these may be conducted remotely to reduce the evaluation mission footprint.
3.3. Deliverables and timeline
Key deliverables are indicated below. These are indicative and may change according to the evaluation scope, timeline, and expectations of UNHCR, as indicated in the ToR.
Key deliverables of emergency response evaluations include:
- Inception report (7-15 pages) and a comprehensive desk review (10-15 pages), confirming the scope of the evaluation, the evaluation questions, the methods to be used, all data-gathering tools and the analytical framework, and summarizing findings derived from a review of existing documentation;
- Presentations/debriefs after the inception phase, the field mission (or remote data collection) and the final report submission, including a PowerPoint presentation or aide-mémoire;
- Workshops with relevant staff in HQ, Regional Bureaux and Country Offices to validate the findings;
- Draft and final evaluation reports (30-50 pages), including a 4-6-page stand-alone executive summary;
- Copies of data collection tools, training material and any other material produced during the evaluation period;
- Any communication material planned for this evaluation (e.g. infographic, podcast).
An indicative timeline for an emergency evaluation, counted from the point at which the external evaluation team has been recruited/contracted:
3.5. Management Response
A management response will be prepared within 3 months of the finalization of the evaluation report. The Office of the Assistant High Commissioner (Operations) is responsible for coordinating and clearing the management response to the L3 emergency evaluation, drawing on inputs from concerned Country Offices, Regional Bureaux and Divisions as relevant, and will support reporting after 12 and 24 months on the implementation of the actions committed to.
3.6. Evaluation Reference Group
The evaluation manager and evaluation team will be supported by an Evaluation Reference Group, which will act in an advisory capacity and comprise a representative panel of primary users of the evaluation. The ERG usually includes several external stakeholders. Representatives of the people UNHCR works with and for should, where feasible, be members of the Evaluation Reference Group.
The Group is expected to assess the quality of the evaluation work and provide feedback, notably during specific meetings and workshops organized during the evaluation process. The collaborative framework between the evaluation team and the Evaluation Reference Group (as well as the Group’s membership) will be specified early in the inception phase.
3.7. Audiences
The primary audiences of the conclusions and recommendations of emergency evaluations are the Senior Executive Team, the Director of the Division of Emergency, Security and Supply (DESS), other relevant divisions (DIP, DSPR, DRS, etc.) and clusters (GPC, CCCM, etc.), and the relevant Regional Bureau Director and Country Representative(s).
Secondary audiences include the UN and Government partners, donors, targeted populations, and implementing partners.
Emergency Operation Evaluation – Management Action Points
When heading an emergency operation, expect and make time for a real-time review within 3 months and an evaluation within 15 months of the start of the emergency response.
Nominate a focal point (FP) within the operation to cooperate with the HQ evaluation manager on content and an admin focal point for logistical needs.
The FP is tasked with compiling documents, data and information relevant to the agreed scope of the evaluation.
The FP will identify stakeholders as key informants (KIs) for the data collection phase (evaluation team field visit) and alert the KIs to the request to participate.
Ensure that staff in the operation are informed of the formal initiation of the evaluation, the evaluation team, the timeline and expected contributions.
Attend briefings/debriefings with the evaluation team, review the draft report and confirm which recommendations are accepted, partially accepted or rejected, and the reasons.
Provide the management response to the report and the agreed follow-up action.
Links
Main contacts
Head of UNHCR Evaluation Office: [email protected]