NEW DATES OF THE 14th EES CONFERENCE: 6-10 JUNE 2022

Professional Development Workshops

EES is pleased to offer the following Professional Development Workshops (PDWs) as part of its 14th Biennial Conference. Please note that each workshop is a full-day workshop.


Monday 6 June 2022

Blue Marble Evaluation

Michael Patton  

Blue Marble refers to the iconic image of the Earth from space without borders or boundaries, a whole Earth perspective. Blue Marble Evaluation consists of principles and criteria for evaluating transformational initiatives aimed at a more equitable and sustainable world.
We humans are using our planet’s resources, and polluting and warming it, in ways that are unsustainable. Many people, organizations, and networks are working to ensure the future is more sustainable and equitable. Blue Marble evaluators enter the fray by helping design, implement, and evaluate transformational initiatives based on a theory of transformation.
Blue Marble Evaluation is utilization-focused, developmental, and principles-based, providing ongoing feedback for adaptation and enhanced systems transformation impact.
Incorporating the Blue Marble perspective means looking beyond nation-state boundaries and across sector and issue silos to connect the global and local, connect the human and ecological, and connect evaluative thinking and methods with those trying to bring about global systems transformation. Forecasts for the future of humanity run the gamut from doom-and-gloom to utopia. Evaluation as a transdisciplinary, global profession has much to offer in navigating the risks and opportunities that arise as global change initiatives and interventions are designed and undertaken to ensure a more sustainable and equitable future. This workshop will provide a framework and tools (a thoughtkit) for evaluating global systems transformation.


Between X and Y: Analyzing Causal Mechanisms in Evaluation

Johannes Schmitt  and Martin Noltze

Why should we care about analyzing causal mechanisms? The primary argument for investing in the analysis of causal mechanisms as part of evaluation work is its capacity to answer questions about why and how programs worked or failed to work. Commissioners and evaluators interested in learning more about the modus operandi of a program may open the “black box” and expose the causal mechanisms at work. Particularly in challenging times of shifting contextual conditions, analysis of causal mechanisms can be key to readjusting complex programs and making policies more effective.
But how does one go about causal mechanism analysis? What are the options, and how are they applied in practice? Building on the two trainers’ extensive background knowledge, this one-day Professional Development Workshop will cover the basics and then jump right into analysing causal mechanisms using Realist Evaluation and Process Tracing methodology.

The workshop will bring together evaluators and commissioners of evaluation. The overall workshop objectives are:

  • participants know why and in what situations causal mechanism analysis is appropriate in evaluation;
  • participants know the main concepts and are able to distinguish between different types of causal mechanism analysis based on the evaluation interest and different causality concepts;
  • participants can assess and compare the practical implications (time and cost) of different causal mechanism approaches;
  • participants understand how causal mechanism analysis can be conducted in Theory-Based Evaluation, particularly using Process Tracing and Realist Evaluation;
  • participants will be able to assess the added value of analysing causal mechanisms with regard to policy relevance and methodological rigor from the point of view of their own role and responsibilities.

Complexity-Informed Evaluation – An Exploration in Understanding Pattern, Predictability, and How Change Happens

Jonathan Morell

The content of this workshop will be a succession of lectures, group discussions, and breakout exercises to provide participants with the understanding needed to recognize how complex behavior might play a role in the models they develop, the methodologies they devise, and the messages they extract from their data. Without this understanding, evaluators cannot correctly describe what programs are doing, and why. If they get it wrong, so too will evaluation users and stakeholders.
Understanding the application of Complexity Science to Evaluation requires understanding the relationship between specific constructs and general themes. Each specific construct is useful in its own right, but its full value lies in appreciating the epistemological theme in which it is embedded. This workshop will treat both the specific constructs that can be applied to evaluation and the epistemological themes reflected by the application of those constructs. The general themes are: 1) patterns in the observed consequences of program and policy behavior, 2) the predictability of those consequences, and 3) the reasons why those changes came about. The specific constructs will include: stigmergy, attractors, emergence, phase transition, self-organization, and sensitive dependence.


Concepts, Design Strategies, and Methods for Evaluating Advocacy and Policy Change Initiatives

Annette L Gardner

Several factors have fueled the need for skilled evaluators who can design appropriate advocacy and policy change (APC) evaluations to meet diverse stakeholder needs: increased foundation interest in supporting APC initiatives to achieve transformational, systems-level change; evaluation of democracy-building initiatives worldwide; and diffusion of advocacy capacity beyond the traditional advocacy community (such as to service providers). Evaluators have met these needs with great success, building a new field of evaluation practice, adapting and creating evaluation concepts and methods, and shaping advocate, funder, and evaluator thinking on advocacy and policy change in all its diverse manifestations. The field of APC evaluation has matured and now has a rich repository of guides, instruments, and a book to support evaluation practice.
The aim of this workshop is to expand evaluators’ capacity to design tailored advocacy and policy change evaluations under diverse scenarios. Concepts and definitions, case studies, designs, and tailored measures are combined into a practice-focused workshop. Using the definitive book on the topic, Advocacy and Policy Change Evaluation: Theory and Practice (Gardner and Brindis), as a guide, core principles important to understanding advocacy and policy change initiatives will be discussed. Participants will also discuss options for addressing the challenges associated with evaluation practice, such as the complexity and moving target of the context in which advocacy activities occur, and the challenge of attribution and identification of causal factors. Unique advocacy and policy change evaluation methods and measures, and their application, will be described. Last, the designs and insights from the six evaluation cases described in the book, ranging from an initiative to promote equitable and sustainable transportation to a girls’ empowerment initiative, Let Girls Lead’s Adolescent Girls’ Advocacy and Leadership Initiative (AGALI), will be used to illustrate the real-life scenarios evaluators are likely to encounter.


Hands-on Introduction To Outcome Harvesting

Goele Scheers and Richard Smith

Outcome Harvesting (OH) is a participatory monitoring and evaluation approach that is used to collect (“harvest”) evidence of what has changed (“outcomes”) and, then, working backwards, to determine whether and how an intervention has contributed to these changes. It is being used worldwide to robustly identify, describe, analyse and interpret outcome-level results regardless of whether they were pre-defined. Outcome Harvesting has proven to be especially useful in complex situations when it is not possible to define concretely most of what an intervention aims to achieve, or even, what specific actions will be taken over a multi-year period. Its use helps ensure outcome-level evaluations are firmly and transparently based on credible evidence as defined by the needs of primary users.
With the publication of the first book on Outcome Harvesting, the lead developer of OH, Ricardo Wilson-Grau, shared much of his experience and his reasoning on why and how it is always necessary to adapt OH with each application. This training will be delivered by two experienced OH users who contributed to the book. It is intended for those wishing to start working with Outcome Harvesting who need an understanding of the six OH steps and want to develop their skills in identifying and formulating outcomes through practical exercises. The two facilitators know there is an increasing demand from organisations and consultants within Europe to be trained in Outcome Harvesting. For instance, in Denmark, where the EES conference will take place, widespread adoption of Outcome Harvesting among CSOs is creating a continuing demand for training of new personnel.


How To Address Environmental Sustainability In Your Evaluations

Andy Rowe, Patricia Rogers, Jane Davidson, Dugan Ian Fraser

While there is now greater awareness of and willingness to address environmental sustainability in evaluations, there is considerable uncertainty and concern about how to do this. This workshop is designed to support evaluation practitioners and commissioners, and will provide the approaches, tools, and insights for incorporating consideration of environmental impacts and sustainability into evaluations that currently focus only on human systems. The workshop seeks to advance the contributions of evaluation to a sustainable planet by joining the considerable knowledge and experience of participants with the tools and approaches developed by Footprint Evaluation. The benefits will accrue both to the knowledge and expertise of participants and members of Footprint Evaluation, and to improvements in current and future Footprint tools, methods, and processes.
The workshop will cover evaluation undertakings from start to end, from commissioning and designing through to reporting and communications. It will provide:

  • a checklist to determine whether you should include natural systems in an evaluation;
  • approaches to incorporate biophysical science knowledge and to identify and engage key stakeholders from both systems;
  • key evaluation questions that will enable you to use evaluation criteria such as the OECD-DAC criteria to explicitly address both human and natural systems;
  • guidance on mapping the reach of the evaluand to natural as well as human systems, and on using this mapping to incorporate sustainability into the Theory of Change;
  • a typology and rubrics to evaluate the impact of your evaluand on natural systems;
  • information sources and methods not usually included in evaluation, such as modelling, GIS, scenario simulations, and storytelling, and ways to address the temporal and spatial scales relevant for both human and natural systems.

These tools and approaches will enable evaluation practitioners to address sustainability in their evaluations.


Monitoring AS Evaluation – Real Time Approaches for Real Challenges

Scott Chaplowe

The concept of a “new norm” is an oxymoron, implying a new equilibrium in which life will resume a degree of regularity. Our world is rapidly changing, and the “new norm” will be increasing disruption. With the increased frequency and magnitude of change, humanitarian and development evaluation is being pushed beyond the boundaries of conventional methodologies to explore alternative approaches that embrace the volatility, uncertainty, complexity, and ambiguity (VUCA) that characterize the complex contexts in which interventions are delivered and evaluated. Periodic, one-off evaluations packaged according to the common baseline-midterm-final evaluation recipe are not fit for purpose when it comes to responding and adapting to the needs of rapidly changing implementation.
This workshop will explore the different ways evaluation can be pursued to support emergent, real-time learning to understand, analyze, and act in rapidly changing and unfamiliar contexts. It will stress monitoring as evaluation (MasE) to support course correction and adaptive management. This includes a variety of approaches for timely evaluation to assess and respond quickly to contextual changes and to determine whether planned results are occurring as intended. MasE also interrogates any unplanned consequences for the larger human and natural ecosystem so we can respond in a timely fashion, whether to capitalize on opportunities (if a positive unintended outcome) or mitigate damage (if a negative unintended outcome). The workshop will highlight how real-time communication of evaluation evidence and findings can increase responsiveness and uptake, and the critical importance of participatory engagement so that key stakeholders can interpret and respond in real time.


Participatory Evaluation: Concepts, Method and Practice

Esteban Tapella, Pablo Rodríguez Bilella, Jutta Blauert

This workshop concentrates on Participatory Evaluation (PE). As we will see, PE is not just a question of asking stakeholders to take part or give their points of view. PE is about radically rethinking who initiates and undertakes the evaluation process, and who learns or benefits from its findings. By involving everyone affected, it changes the whole nature of the evaluation. In a PE, the different actors involved in the intervention define what will be evaluated, who will participate and when, which data collection and analysis methods will be used, and how results will be communicated. In this approach, professional evaluators, project staff, project beneficiaries or participants, as well as other community members, all become colleagues in an effort to improve the community's quality of life. Fundamentally, PE is about sharing knowledge and building the evaluation skills of program beneficiaries and implementers, funders, and others. The process seeks to honor the perspectives, voices, preferences, and decisions of the least powerful and most affected stakeholders and program beneficiaries.
The workshop is rooted in the evaluation framework we developed in our book “Sowing & Harvesting. Participatory Evaluation Handbook”, published by DEval in 2021 in Spanish and English. This will help participants understand that context matters and that any PE project should be collaboratively designed and developed on the basis of stakeholder information needs and interests. During the workshop we will present the various ways in which participatory approaches to evaluation can be implemented, such as self-assessment, stakeholder evaluation, internal evaluation, and joint evaluation, but we will concentrate mainly on our PE approach. We will also illustrate how different tools can be used, such as individual storytelling, PE card games, participatory social mapping, causal-linkage and trend-and-change diagramming, scoring, and brainstorming on program strengths and weaknesses.


Tuesday 7 June 2022

Career Development Training For Young and Emerging Evaluators

Alena Lappo, Mariana Branco, Taruna Gupta

High-quality, relevant evaluations used for evidence-based policy making are vital for achieving sustainable and equitable development, especially in the context of the COVID-19 pandemic. To produce quality evaluations, competent and skilled evaluators must be available. However, an ongoing challenge remains for evaluation across the globe: the pool of talented evaluators is at times shallow, and demand exceeds supply. Encouraging youth to enter the evaluation profession has been recognized as one effective strategy to address this, as reflected in the inauguration of the EvalYouth movement in 2015.
Developing the evaluation capacities of young and emerging evaluators (YEEs) must be based on a systemic approach that takes into account three interdependent levels (individual, institutional, and the external enabling environment) and two components (demand and supply). The EvalYouth Global Network, the UNFPA Evaluation Office, the P2p+ initiative, and the Global Evaluation Initiative formed a partnership to develop a training programme on career development in M&E for YEEs. This workshop builds on the partnership's training products and on the EES competency frameworks to develop capabilities for evaluation.
A module of the developed training will be delivered at the EES conference as a one-day workshop and will provide young evaluation professionals and novice evaluators with an overview of the evaluation landscape and possible career paths in evaluation. It will also give practical tips on successful participation in the conference.


Community-led Monitoring and Evaluation: Principles and Tools

Gunjan Veda, Matthew Cruse, Molly Wright

As the decolonization and locally-led development agendas gain traction globally, there is an increased focus on the need for Community-led Monitoring and Evaluation (ColMEL). Two recent studies - a landscape analysis of 173 CLD programs from 65 countries and a rapid realist review of 56 programs[1] - by the Movement for Community-led Development (MCLD), a global consortium of 1500+ local civil society organizations and international NGOs, clearly demonstrate that in order to be truly community-led, organizations need to rethink the way they conduct monitoring and evaluation. If communities are leading their own development, they need to know how they are doing, which solutions are working and which aren't, and how they can improve them. This means that communities and community-based organizations have to be part of all stages of the MEL cycle, from deciding what programs should be evaluated for, how, and by whom, through to data analysis, validation, and decisions on how to use the results. This requires a radical shift in the way we think about evaluations: they should not be an instrument to measure “human worth, motivation or achievement,”[2] but rather one to support learning and continuous improvement.


Evaluability Assessment For Transformative Evaluation Practice

Tamara Walser

The purpose of this workshop is to explore and engage the transformative use and potential of evaluability assessment (EA). EA was developed in the 1970s as a pre-evaluation activity for determining if a program was ready for outcome evaluation, with a focus on management as primary intended users. Much like evaluation in general, EA theory and practice has evolved to address the complex needs of programs and their communities. No longer tied exclusively to management decisions about outcome evaluation, EA can be used as a collaborative evaluation approach at any point in a program’s lifecycle. Transforming our understanding and application of EA unlocks its potential to engage program and organization communities in evaluation, address program complexity, support culturally responsive and equity-focused evaluation, and build evaluation capacity. Using our four-component EA model as a framework, and through examples, case scenarios, small group activities, and discussion, workshop participants will consider and apply a transformative EA approach.
In our book, Evaluability Assessment: Improving Evaluation Quality and Use Across Disciplines (Trevisan & Walser, 2015), we introduce a four-component EA model that bridges historical conceptions of EA with current EA theory and use. This workshop will respond to Theme 4, Transforming Evaluation Methods. We will provide a brief overview of current EA theory and practice, including its resurgence across disciplines and globally. The focus of the workshop will be implementing transformative EA using our four-component model as a guiding framework. We will use examples, case scenarios, small group activities, and discussion to allow participants to engage with the content and gain insight into how they can consider and apply a transformative EA approach in their work.


How to Manage Conflict in Evaluation: How to Make Evaluations as Useful as Possible While Respecting Evaluation Integrity

Burt Perrin, Martha McGuire

Evaluators often are “requested” to make changes to their reports for a variety of possible reasons. Under what circumstances is this appropriate, or not? What can evaluators and evaluation managers do to prevent inappropriate requests for changes, and what can one do to reach a satisfactory solution if such requests do occur? This session will discuss steps that both internal and external evaluators can take to harness and effectively manage conflict, in such a way that both the integrity and the utility of evaluation are respected. More specifically, this session will discuss the following:

  • Under what circumstances may it be appropriate, or not, for evaluators to be given feedback and requested to make changes to their reports?
  • What steps can both internal and external evaluators, as well as evaluation managers, take throughout the evaluation process so that inappropriate pressure to distort evaluation findings and reports does not arise?
  • What steps can be taken to resolve conflicts, including attempts at inappropriate interference with the integrity of an evaluation, that may arise at any stage in the evaluation process, and in particular at the reporting stage?

The session will focus on the practical, so that participants can leave the session with ideas that they can implement. The session will highlight the importance of the art of evaluation, such as in being able to make sometimes difficult choices, which requires evaluators to have competencies in the areas of interpersonal skills, facilitation, and negotiation.


Improved Quality Of Life As A New Evaluation Criterion Of Infrastructural Projects: Theoretical Considerations And Methodologies For Practical Applications

Francesca Ardizzon, Silvia Vignetti, Chiara Pancotti

The workshop aims to explain why there is a need to go beyond economic indicators when assessing infrastructural interventions, especially those with a public service mission, and to consider their role in economic well-being and quality of life for a community. Moreover, it provides practical examples and presents the main methodologies that can be used to value improved well-being and quality of life and to overcome the lack of available indicators at the project level that can ascertain this effect. This twofold objective is reflected in the workshop’s two main sessions, each of which is further split into two sub-sessions.
The workshop pursues two main learning objectives. First, it will allow participants to reflect on the importance of considering improved well-being and quality of life as an additional evaluation criterion when assessing the results achieved by infrastructural projects. Second, once participants understand the limitations of evaluations that focus only on economic indicators, they will gain fresh knowledge of the main methodologies that can be used to measure the impact of different types of infrastructural projects on quality of life. This teaching part will build on evaluations carried out on large infrastructural projects funded by EU Cohesion Policy.


Introduction to Communities of Practice (CoP): Considerations for Evaluators, Evaluation, and Transformative Learning

Leah Christina Neubauer, Thomas Archibald

This interactive skill-building workshop will introduce Communities of Practice (CoPs) and demonstrate their application as a methodology for interrogating one’s evaluation practice, the evaluation profession, and evaluators’ roles in society. Increasingly, evaluators are called on to evaluate and participate in CoPs, whether in person or in virtual global settings. Grounded in critical adult education and transformative learning, this session will focus on CoPs that engage learners in a process of knowledge construction and unlearning/relearning around common interests, ideas, passions, and goals. Participants will develop a CoP framework that includes the three core CoP elements (domain, community, practice) and processes for generating a shared, accessible repertoire of knowledge and resources. The three core elements and the framework will provide a broader foundation for discussing monitoring, evaluation, and learning (MEL) and evaluative thinking. Co-facilitators will highlight examples of CoP implementation in MEL from across the globe in development, education, and community health through lenses of transformation. Participants will engage in a series of hands-on, inquiry-oriented techniques, analyzing how CoPs can be operationalized in their evaluation practice.


The Success Case Method in Times of Transformation: An Opportunity for Rapid Impact Evaluation

Daniela Schroeter, Daniela Zahn

The Success Case Method (SCM) is typically classified as a method-centered approach to evaluation. It allows the incorporation of theory-driven, utilization-focused, participatory, and transformative ways of thinking into the evaluation of impact, using mixed-methods questionnaires and interviews to identify high- and low-impact examples, with the ultimate goal of informing the improvement of processes and transforming the organizational impacts of a program. In contrast to traditional impact evaluation methods, such as randomized controlled trials and strong quasi-experimental designs, the SCM provides a fast and cost-effective way to identify barriers to impact and opportunities for improvement during the development and implementation stages of interventions. Use of the method can increase the quality of interventions and reduce waste in developmental implementations that are awaiting experimental results. Developed for use in the field of training and development (Brinkerhoff, 2003, 2005), the Success Case Method has potential in a broad range of program evaluation contexts (e.g., Coryn, Schröter, & Hansen, 2009; Piggot-Irvine, Aitken, & Marshall, 2009; Olson, Shershneva, & Brownstein, 2011; Medina, Acosta-Perez, Velez, Martinez, Rivera, Sardinas, & Pattatucci, 2015).
The purpose of this hands-on workshop is to introduce the SCM and to provide opportunities for practicing elements of the method. By the end of the workshop, participants will understand the steps involved in applying the SCM, create a theory-driven impact model based on a case, draft question sets suitable for identifying high and low success cases via web-based questionnaires, and develop possible interview questions for documenting stories of success and opportunities for improvement. The workshop concludes with a discussion of the strengths, limitations, and opportunities associated with using the SCM within its traditional context of training and development as well as in alternative program evaluation contexts. Participants will receive materials in advance and are asked to bring their laptops for use in the workshop.
 


