Development Works Changemakers joined a webinar on Evaluation for Transformational Change, organised by UNICEF’s Evaluation Office, EVALSDGs and the International Development Evaluation Association (IDEAS).
The presenters were Rob van den Berg and Cristina Magro, the President and Secretary-General of IDEAS. The two are the editors of IDEAS' recently published book “Evaluation for Transformational Change: Opportunities and Challenges for the Sustainable Development Goals (SDGs)”, on which this webinar was based.
The book presents essays (rather than academic articles) written by “learned agents” in both the Global South and Global North. It combines the perspectives and experiences from a variety of contexts.
The essays discuss what evaluators, and evaluation practice more broadly, need to do to progress from the traditional focus on programmes and projects to an increased emphasis on evaluating how transformational changes for the SDGs can be supported and strengthened. Van den Berg and Magro discussed some of the key essays and concepts presented in the book. They then opened the floor for questions and answers.
One key theme discussed was the need for evaluators to move towards dynamic evaluations for transformational change. Evaluators are encouraged to shift from the traditional ‘static’ evaluations of the past, which look at what has happened, towards ‘dynamic’ evaluations that deal with the complexities of transformational change.
Examples include the need to shift focus from programmes/projects to strategies and policies, from micro to macro, from country to global, and from linearity to complexity. The editors suggested several key practices for dynamic evaluations. Evaluations should be done in “quasi-real-time”: not only looking at what has happened in the past and what is happening now, but also considering the potential for the future.
Emphasis should be placed on the context in which an intervention takes place, both understanding it and improving it. There is also a need for multidisciplinary evaluation teams, combining an array of expertise and insights rather than relying on evaluation practice in isolation.
Here, they suggest that the involvement of universities in evaluation teams should be promoted. Academics have sociological and community insights and can contribute through background papers and studies. They can also offer more academic and theoretical perspectives which complement the more practical evaluation perspectives.
The editors promote systems thinking and systems evaluations for transformational change. They define systems as “dynamic units that we distinguish and choose to treat as comprised of interrelated components, in such a way that the functioning of the system, that is, the result of the interactions between the components, is bigger than the sum of its components.”
They suggest that by adopting a systems viewpoint, evaluators are in a better position to encourage learning, take on transformation thinking, and assist in identifying and promoting sustainable transformational changes.
To adequately adopt a systems-thinking approach, the editors highlighted four challenges and opportunities for us to consider:
- Evaluators firstly need to become ‘fluent’ in systems thinking in order to appropriately apply systems concepts, approaches and methods in their evaluations.
- Evaluators need to be increasingly receptive to systems analytics and the information and evidence it produces, especially analyses that consider future-oriented scenarios that could lead to transformational change.
- The type of system will dictate the approach required. As such, while there are various approaches, instruments and methods that systems analytics offers, evaluators must use their discretion in identifying those most relevant to their assignment.
- Evaluations should provide insights as to whether interventions are able to overcome barriers that they face, enhancing sustainability.
While learning and feedback loops are often encouraged in evaluation, the editors assert that they are a key practice for transformation. Evaluators should not only be asking whether the intervention was implemented correctly and was effective, but also whether the problem to be solved was framed in the right way to begin with.
By asking more difficult questions, one can better understand what kind of transformational change should actually be accomplished. The editors discussed a triple loop of learning as depicted in the figure below.
While the first feedback loop asks what we have learned, the second loop looks at whether the initiative is indeed the right one for the problem needing amelioration, and the third loop asks whether we looked at the problem in the right way to begin with.
There should be constant feedback loops as we gain an understanding of the programme, context, actors, etc., and of the actions we take to achieve transformation. We should increasingly look to the future of the programme rather than confine it to the present. Evaluations need to start looking beyond the intervention in itself and place it within the system it is supposed to address.
Systems thinking and the triple learning loop together speak to the need for systems to become more sustainable. Evaluators have often considered sustainability, but this has typically been defined by the long-term programme results.
The editors assert the need for a different approach, emphasising that sustainability should be redefined as “an adaptive and resilient dynamic balance between the social, economic and environmental domains”, where the economic domain no longer exploits the environmental (e.g. climate change) and social (e.g. social inequity) domains.
In order to be adaptive and resilient, one needs preparatory systems. Evaluators can play a role in pointing to these systems, and to the issues that need to be addressed, during the course of an evaluation.
The editors assert that sustainability is a system issue: sustainability is achieved when systems become adaptive and balanced over time in the relationship between the three domains of social, economic and environment. If the social domain is disregarded, consequences can and have included inequity, inequality, and struggles with healthcare, labour conditions and conflict.
On the other hand, when the environmental domain has been neglected, climate change, a loss of biodiversity and pollution have ensued. The economic domain tends to take precedence due to the common belief that economic growth will resolve societal ailments through creation of jobs and wealth, and environmental damage through the creation of new technologies.
In practice, the editors encourage evaluators to continuously ask broader questions about an intervention and how it interacts with these three domains. They propose three sustainability questions evaluators should consider when starting an evaluation, namely whether the transformation that the intervention aims for leads to:
- More equity, human rights, peace and social sustainability (social domain)
- Strengthening of natural resources, habitat and sustainable eco-services (environmental domain)
- Economic development that is equitable and strengthens our habitat (economic domain)
The editors encourage evaluators to use these questions as part of their “toolbox” when looking at transformational initiatives. By going through these questions, it becomes clearer where the programme could improve and where additional knowledge and expertise are required.
The webinar provided interesting food for thought with regard to contributing to transformational change. Many of the key principles raised are certainly desirable, including dynamic evaluations, learning and feedback loops, and considering the future of a programme for sustainability.
In order to contribute to transformational change, we need to promote constant learning, encourage participation from key stakeholders, increasingly expand the evaluation team to be informed by sector experts, and continuously look at potential scenarios, risks and hazards. The application of these principles can be harder in practice; often contractors require specific answers to specific questions, and to go beyond the scope can require additional budget and additional time.
While these ambitions may be larger than what is currently feasible for many evaluation contracts, change often only manifests through radical action. As evaluators, our thinking should be constantly stimulated, our learnings continuously shared, and our boundaries tested.
Bearing such principles in mind and applying them where feasible, even one step at a time, can hopefully slowly but surely advance transformational thinking in programming and evaluation, and therefore contribute to desired transformational change.
By Jenna Joffe