Case study: Design and Implementation Evaluation of the Cash Plus Care Programme

At Development Works Changemakers (DWC) we have a passion for social change, and this is reflected in our many case studies and successful evaluations. In April 2019 we began working with the Western Cape Department of Health (WCG: Health) and the Desmond Tutu HIV Foundation (DTHF) to conduct a design and implementation evaluation in the health and social development sector. The project ran until August 2019 and was funded by the Global Fund to Fight AIDS, Tuberculosis and Malaria (GF).

Project Outline

The Cash Plus Care programme is a component of the Western Cape Department of Health’s (WCG: Health) Young Women and Girls Programme (YW&G), funded by the Global Fund (GF). The programme was implemented in two neighbouring sub-districts in Cape Town: Mitchells Plain and Klipfontein.

The aims of the YW&G programme were to: decrease new HIV infections among girls and young women (aged 15-24); decrease teenage pregnancies; keep girls in school until completion of Grade 12; and increase economic opportunities for young people through empowerment. The objectives of the intervention were to enhance the sexual and reproductive health of participants through preventative and promotive health and psychosocial interventions, while supporting their meaningful transition to adulthood, and to reduce HIV and STI incidence, unintended pregnancy and gender-based violence amongst Cape Town women in late adolescence.

The programme, also called “Women of Worth” (WoW), provided 12 weekly empowerment sessions and access to youth-friendly healthcare, seeking to address a range of biomedical, socio-behavioural, structural and economic vulnerabilities amongst participating young women. Cash incentives of R300 per session were provided to the young women for participation in empowerment sessions. Implementation of the programme was sub-contracted to the Desmond Tutu HIV Foundation (DTHF) by the Western Cape Department of Health in November 2016, and implementation of the programme began in April 2017.

DWC was contracted by WCG: Health to conduct an evaluation assessing the Cash Plus Care programme’s design and implementation. Specifically, the evaluation focused on the appropriateness of the programme design, incentives, and recruitment processes; beneficiary and stakeholder satisfaction; and the quality of implementation. The evaluation also identified successes and challenges/barriers, and made recommendations to the Global Fund and WCG: Health to inform the design and implementation of future programmes.

Image: Pentecostal Upper Hall Church

Project Deliverables

Project deliverables were:

  • A theory of change workshop with all key stakeholders from WCG: Health and the DTHF
  • A theory of change diagram showing all key contextual factors, assumptions, inputs, outputs and outcomes
  • A draft evaluation report
  • A final evaluation report
  • A final report in 1:5:25 format

Our Approach

This evaluation was formative and clarificatory, aimed primarily at learning and improvement, but it included some elements of early impact assessment (particularly unintended consequences) as well as summative aspects that informed decision-making on the future of the programme and similar programmes.

The evaluation adopted a mixed-methods design, relying mostly on qualitative data. Existing quantitative data was drawn on, where appropriate, from the various programme and other documents reviewed.

Both primary and secondary qualitative data were gathered and analysed to answer the evaluation questions. This evaluation, which was essentially a rapid assessment, relied on the design strength of a mixed-methods approach, which allows for triangulation of methods, observers and measures. Given that the programme experienced several changes to its initial design, the developmental approach followed emphasised learning and improvement.

Secondary data was obtained from project documents, while primary data was obtained from the following sources:

  • A Theory of Change workshop with a large range of stakeholders;
  • Key Informant Interviews with WCG Health and DTHF staff (16 interviews);
  • One day site visits to five of the 11 operating empowerment session venues;
  • During each site visit, an interview with the site facilitator, at least one FGD with current participants at the site, and one-on-one interviews with other participants, where targeted. Empowerment sessions were also observed, and key aspects were noted on an observation tool;
  • Telephonic interviews with previous graduates who were “cash-no”;
  • Telephonic interviews with programme dropouts, both “cash-yes” and “cash-no”.

In all, interviews and FGDs involving 73 beneficiaries were conducted.

Image: Philippi Village

Value

The value of this design and implementation evaluation was firstly that it provided WCG: Health and the DTHF with an independent examination of the complexities of implementing the Cash Plus Care programme in the Western Cape. The programme implementers had experienced a number of challenges since its inception. Many of these related to delays in contracting the sub-recipient (DTHF) and the sub-recipient’s subsequent rush to catch up on various aspects of the programme. Another key challenge was that this programme was both an intervention (with the above-stated aims) and an academic randomised controlled trial, designed to inform the GF about the efficacy of using cash incentives to reduce risky behaviour.

Many of the design and implementation challenges had to do with trying to align the needs of a randomised controlled trial with the realities of implementing an empowerment programme with young women living in a complex socio-economic setting. Randomly allocating half of the targeted participants to the “cash-yes” group and the other half to the “cash-no” control group caused numerous problems.

For a start, those in the control group quickly learnt that they were not receiving cash, and many then dropped out. Recruitment of young women for the study was also not effective at the beginning of the programme, which meant it struggled to reach its targeted numbers in time. Only once cash incentives were made available to all participants and a proper community mobilisation team was put in place did the numbers pick up, enabling the programme to reach its goal. This evaluation brought to light many of these issues and made recommendations on how to mitigate them in similar future programmes.

The delayed commencement of the programme and the subsequent rush also meant that the biometric system used to manage participation and incentives was not properly ready. Many glitches were experienced, which had to be corrected and mitigated along the way. This evaluation helped to document these problems and the ways in which they were solved.

The evaluation also brought to light the experiences of participants and the value they felt the programme had brought to their lives. It showed some emerging elements of empowerment and behaviour change, as well as new forms of social cohesion forming between attendees. The many recommendations made, based on these findings, were of great value to the programme implementers and funders, who received the evaluation very positively.

Image: Tell Them All International

Development Works Changemakers Evaluation

Over the next couple of months, we’ll be showcasing more of our case studies and highlighting the various methods of our approach.

To stay up to date with industry news and happenings, you can sign up to our newsletter and follow us on social media.


Six highlights of working as an Evaluator

Development Works Changemakers (DWC) is dedicated to creating a community of global changemakers who implement innovative development solutions to address global socio-economic challenges and transform communities. Evaluation is a key part of this process.

Our expert team includes experienced evaluators who love their job. Here are six powerful highlights of working in evaluation, as shared by team members Susannah, Fia, and Jenna.

1.   Working with specialists in a variety of sectors

One of the key aspects is being able to apply our technical evaluation expertise across various sectors as we collaborate with sector specialists. This makes for a very enriching experience. You’re always dealing with new subject matter and looking for solutions that work across different areas and sectors.

It’s empowering to work with economists, sector specialists, and others in the development sector who all share the common goal of finding solutions that work and improve programming. This includes working with multidisciplinary teams, learning from them, and picking up useful skills and knowledge.

Image: Evaluators at a workshop

2.   Constant learning and sharing of knowledge

With evaluation, you’re generally working with a variety of people: stakeholders, beneficiaries, community groups, and more. It’s wonderful to have that human interaction, to work with people and understand their point of view and position. Working with people from diverse backgrounds, and always trying to see things from their perspective, is very rewarding.

There is also the opportunity to pick up new skills and methods, then apply them. Project-based work gives you a clean slate for each project, providing a chance to implement new, suitable and relevant methods. The evaluation space is one of constant learning.

Image: Group of evaluators

3.   Unique daily challenges and growth

The evaluator’s role is varied, involving a wide range of topics and sectors. Every evaluation is different, which makes the work very interesting and puts you in touch with a different range of stakeholders and programs. In the process, you learn a lot about the development sector through your work, growing in your skills with each project.

This also means that you are constantly being challenged as there is no routine. Each project needs continuous adaptation and learning.

4.   Engaging directly with people

The opportunity to engage directly with people, and to better understand development issues and people’s lives, is a highlight: working with people during interviews, for example. It is an immense privilege to learn about the situations of people in different contexts, gaining a broader understanding of the diversity of humanity. This is possibly the most fascinating part.

In this engagement with people during interviews, as a researcher and evaluator, you also gain personally. You cannot do research and evaluation without being touched or influenced by the lives of others.

5.   Telling a story through data

There’s a joy that comes from using analytics and data as evidence for making decisions. Using data that’s collected by the program itself, or data collected through the process, to inform decision making is often more effective than relying on what other people have said in the past.

This is so important: following what the evidence is saying and tailoring decisions and programmatic strategies according to what the data depicts. Now more than ever, this is an important direction to be moving in, with data leading the way in decision making. Both qualitative and quantitative analytics can tell a story to inform decision making.

Image: Analytics and statistics

6.   Being a Strategic Changemaker

Being an evaluator gives you an opportunity to make a contribution to the development sector as you’re making recommendations in evaluations for projects, programs, and policies.

In this way, you’re able to support change and improvement. It’s not just about getting the report out, but rather about improving projects and programs so that the finances and funding set aside for development can be spent more efficiently, and on the types of interventions that will have the desired results. This all contributes to shaping real and positive change.

Having the opportunity to help organizations and staff members improve their programming at a very strategic level is important, whether through an independent evaluation providing recommendations about what you, as an objective observer, see, or through suggestions gathered from the people you’ve spoken to. All feedback helps shape positive change.

It’s inspiring and motivating to be involved at such a strategic level, engaging with diverse stakeholders, including speaking to higher-level program directors and gaining their feedback on learnings to be incorporated into the program for future planning. Insights from various stakeholders are all valuable for shaping, focusing, and prioritising.

The inputs of staff members who may be new to the research and evaluation process are also valuable. We appreciate the chance to help build capacity, support them in setting up M&E frameworks, and help programme staff and decision-makers build a good practice of regularly and methodically collecting and using data for decision making.

For some, M&E can be seen as an additional responsibility and burden, but when it’s positioned as a tool for learning and they can see the progress towards targets, it can be very rewarding.

How the COVID-19 Crisis Shows We Need More Feminist Evaluation

Myths about Feminist Evaluation and how the COVID-19 crisis shows we need more feminist evaluations 

There is broad agreement in the evaluation community that evaluators often have to be eclectic. Evaluators need to know the evaluation theory landscape and be aware that some approaches are appropriate in certain contexts and not in others. Evaluators also have to be able to implement a range of evaluation approaches and know that a single approach may not offer everything needed for a specific evaluation.

Image: Cartoon about theories

Chris Lysey – Fresh Spectrum

Feminist Evaluation is one such approach. It is not relevant in all situations and has limitations. However, the potential of feminist evaluation may be much larger than its current use, particularly given that the vast majority of development projects focus on social issues related to vulnerability and marginalisation. For some, the name may be a hurdle. Because of this, Feminist Evaluation is not fully recognised for its flexibility, utility and relevance, and is therefore likely to be under-utilised.

Myths about Feminist Evaluation

Judging by some common myths about the approach, it seems that Feminist Evaluation may be misunderstood.

  • Myth: Only women can be feminist evaluators
  • Myth: Feminist Evaluation is only about women’s rights
  • Myth: Feminist Evaluation and gender evaluation are the same

A myth is “a traditional story, especially one concerning the early history of a people or explaining a natural or social phenomenon, and typically involving supernatural beings or events”. Or “A widely held but false belief or idea.”[1]

Feminist evaluations are scarce

Feminist evaluations are not encountered often and, with the exception of some donors, it is extremely rare to find a Terms of Reference that explicitly asks for a Feminist Evaluation. Considering the strong reactions elicited by the word “feminism”[2], it is no surprise that feminist evaluation is not common. And even when this approach is used, it may be presented under a different name, such as gender evaluation.

In addition to evaluators’ reluctance to label their studies as Feminist Evaluation, the dearth of such studies may stem from the approach being regarded as relatively new – although it has in fact existed for a significant period of time. Another factor that could contribute to the low profile of Feminist Evaluation is that discussions of evaluation methods often do not include it.[3]

Image: People protesting

Core beliefs

An evaluator who is committed to the protection of human rights, wants to ensure that the voices of marginalised people are heard, and wants to use Feminist Evaluation, often needs to master the art of diplomacy first. Some propose that evaluators should not introduce Feminist Evaluation by its name, but rather by the core beliefs that underpin the approach, presented as potentially useful lenses for an evaluation. The core beliefs underpinning Feminist Evaluation are often more palatable.

These beliefs are:

1. There should be equity amongst human beings. Equity should not be confused with equality. Equity is giving everyone what they need to be successful. Equality is treating everyone the same.

“Equality aims to promote fairness, but it can only work if everyone starts from the same place and needs the same help. Equity appears unfair, but it actively moves everyone closer to success by ‘levelling the playing field.’ But not everyone starts at the same place, and not everyone has the same needs.” – Everyday Feminism [4]

2. Inequality (including gender inequality) leads to social injustice.

“Social inequality refers to relational processes in society that have the effect of limiting or harming a group’s social status, social class, and social circle.” It can stem from society’s understanding of gender roles, or from social stereotyping. “Social inequalities exist between ethnic or religious groups, classes and countries making the concept of social inequality a global phenomenon”.

Social inequality is linked to economic inequality, although the two are not the same. “Social inequality is linked to racial inequality, gender inequality, and wealth inequality.” Social behaviour, including sexist or racist practices and other forms of discrimination, tends to filter down and affect the opportunities people have access to, which in turn affects the wealth they can generate for themselves. – ScienceDaily[5]

3. Inequalities (including gender-based inequalities) are systematic and structural

“Conceptions of masculinity and femininity, ideas concerning expectations of women and men, internalised judgements of women’s and men’s actions, prescribed rules about proper behaviour of women and men – all of these, and more, encompass the organisation and persistence of gender inequality in social structures. The social and cultural environments, as well as the institutions that structure them and the individuals that operate within and outside these institutions, are engaged in the production and reproduction of gender norms, attitudes and stereotypes. Beliefs that symbolise, legitimate, invoke, guide, induce or help sustain gender inequality are themselves a product of gender inequality.” – European Institute for Gender Equality[6]

Image: Power to the people protest

Checking the myths against the core beliefs

Myth: Only women can be feminist evaluators

Feminist Evaluation can be used by evaluators who do not identify as feminists

If the evaluator identifies with one or more of the core beliefs associated with feminist evaluation, the approach can be used – whether or not the evaluator identifies as a feminist, and whether or not the evaluation is labelled as a Feminist Evaluation.[7] When undertaking a Feminist Evaluation, the evaluator can use one or more of the core beliefs to shape the evaluation: what data is collected, what data sources will be used, and what critical insights and perspectives are required to adequately address the evaluation questions at hand.

Myth: Feminist evaluation is only about women’s rights

Feminist evaluation is about human rights, not only women’s rights

While the essence of Feminist Evaluation theory is to reveal and provide insight into those individual and institutional practices that have devalued, ignored or denied access to women, it also relates to other oppressed and marginalised groups[8], and to other forms of inequality. What distinguishes Feminist Evaluation is its focus on the impact of culture, power, privilege, and social justice.[9]

Feminist theories and feminist research

There are a whole range of variations of feminist theories, including “liberal, cultural, radical, Marxist and/or socialist, postmodern (or poststructuralist), and multiracial feminism” (Hopkins and Koss, 2005 in Mertens & Wilson, 2012:179). Each of these focuses on different forms of inequality.

Feminist research is part of the genre of critical theory, and Feminist Evaluation has developed alongside feminist research, which followed a path from “feminist empiricism, to standpoint theory, and finally to postmodern feminism” (Seigart, 2005 in Podems, 2010: 3)[10].

Myth: Feminist evaluation and gender evaluation are the same 

Feminist and gender evaluation are not the same

The gender and development (GAD) approach evolved from the Women in Development (WID) and Women and Development (WAD) approaches[11]. Gender approaches started with Women in Development (WID), which emphasised women’s economic contribution but neglected to consider how this approach put additional strain on women. Women and Development (WAD) made connections between women’s position in society and structural changes, but failed to challenge male-dominated power structures.

The GAD approach:

  • Focuses on how gender, race, and class and the social construction of their defining characteristics are interconnected.
  • Recognises the differential impact of projects, programmes and interventions on men and women (necessitating the collection of gender-disaggregated data).
  • Encourages data collection that examines inequalities between men and women, and uses gender as an analytical category.

Feminist Evaluation views women in a way that recognises that different people (including women) experience oppressive conditions differently, as a result of their varied positions in society, resulting from factors such as race, culture, class, and (colonial) history.

The difference between Gender Evaluation and Feminist Evaluation

Gender Evaluation: Maps/records women’s position.
Feminist Evaluation: Attempts to strategically affect women’s lives, as well as the lives of other marginalised persons.

Gender Evaluation: Sees the world in terms of “men” and “women”, and does not recognise differences between women based on class, culture, ethnicity, language, age, marital status, sexual preference, and other differences.
Feminist Evaluation: Acknowledges and values these differences, realising that “women” are a heterogeneous category.

Gender Evaluation: Appears to assume that all women want “what men already have, technically should have, or will access through development interventions”, i.e. that equality with men is the ultimate goal.
Feminist Evaluation: Allows for the possibility that women may not want what men possess. This requires different criteria, which will generate different questions and lead to vastly different judgements and recommendations.

Gender Evaluation: Provides written frameworks that guide the evaluator to collect data, but does not include critical feminist ideals in those frameworks.
Feminist Evaluation: Does not provide frameworks that guide the evaluator. Instead, evaluators are motivated to be reflexive and are not regarded as value-free or neutral. It explores different ways of knowing and listens to multiple voices. The need to give voice to women within different social, political and cultural contexts is emphasised, and it advocates for (all) marginalised groups.

Gender Evaluation: Gender approaches are not challenged as Western concepts.
Feminist Evaluation: The word “feminist” elicits a range of responses, and it may appear that feminist evaluation proposes a biased approach. Some see feminism as a Western concept, and question whether it is appropriate in a non-Western context.

Source: Podems, 2010: 9

Feminist evaluators are advocates for human rights

A core element of feminist evaluation is that it challenges power relations and the systemic embeddedness of discrimination. This, together with the recognised and preferred role of the evaluator as an activist, distinguishes feminist evaluation from principles-focused evaluation.

The primary role of the evaluator is to include the marginalised, absent, misrepresented and unheard voices. The philosophical assumptions of the transformative evaluation branch, where feminist evaluation is located, form the foundation for inclusive evaluation. The evaluator does not exclude the traditional stakeholders who are usually included in evaluations (e.g. intended users, decision-makers, programme staff, implementation partners, funders and donors), but ensures that data is gathered from an inclusive group of stakeholders and that those who have traditionally been under-represented, or not represented at all, are included.[12] Feminist evaluation, like other approaches that fall under the transformative branch, is a bottom-up approach that makes change part of the evaluation process.[13]

Making Feminist Evaluation practical

All of this may sound rather theoretical, but there are ways to make Feminist Evaluation practical. Feminist evaluation does not provide set frameworks and does not identify specific processes. It does, however, have eight tenets, which provide a useful “thinking framework” for evaluators.

EIGHT TENETS OF FEMINIST EVALUATION[14]

  1. Evaluation is a political activity, in the sense that the evaluator’s personal experiences, perspectives and characteristics come from, and lead to a particular political standpoint.
  2. Knowledge is culturally and socially influenced.
  3. Knowledge is powerful, and serves direct and articulated purposes, as well as indirect and unarticulated purposes.
  4. There are multiple ways of knowing.
  5. Research methods, institutions and practices have been socially constructed.
  6. Gender inequality is just one way in which social injustice manifests, alongside other forms of social injustice, such as discrimination based on race, class and culture, and gender inequality links up with all three other forms of social injustice.
  7. Gender discrimination is both systematic and structural.
  8. Action and advocacy are regarded as appropriate ethical and moral responses from an engaged feminist evaluator.

Feminist Evaluation can be made practical by mapping its tenets to Michael Patton’s principles-focused evaluation, re-labelling them using Patton’s GUIDE Framework.

PATTON’S GUIDE FRAMEWORK AND PRINCIPLES-FOCUSED EVALUATION

The GUIDE Framework [15] is a set of criteria which can be used to clarify effectiveness principles for evaluation. It is used in Principles-Focused Evaluation (PFE). GUIDE is an acronym and mnemonic specifying the criteria for high-quality principle statements. A high-quality principle:

  • Provides Guidance
  • Is Useful
  • Inspires
  • Supports ongoing Development and adaptation
  • Is Evaluable

Principles-Focused Evaluation (PFE) is based on complexity theory and systems thinking. This approach operates from the perspective that principles inform and guide decisions and choices, and maintains that the deeply held values of principles-driven people are expressed through principles that translate values into behaviours. In this approach, principles become the evaluand, and the evaluation considers whether principles are clear, meaningful, and actionable; whether such principles are actually being followed; and whether they are leading to desired results.[16]
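To make the GUIDE criteria a little more concrete, here is a minimal sketch in Python of the criteria used as a review checklist for a principle statement. The phrasing of the questions is our paraphrase of the GUIDE acronym above, and the sample principle is taken from the table below; in a real PFE process these judgements are made with stakeholders, not automated.

```python
# A minimal sketch of the GUIDE criteria as a review checklist.
# The question wording is a paraphrase for illustration; real PFE
# judgements are made deliberatively with stakeholders.
GUIDE_CRITERIA = {
    "Guidance": "Does the principle provide guidance for action?",
    "Useful": "Is the principle useful to those who must apply it?",
    "Inspiring": "Does the principle inspire?",
    "Developmental": "Does it support ongoing development and adaptation?",
    "Evaluable": "Can adherence to the principle, and its results, be evaluated?",
}

def review_principle(principle: str) -> None:
    """Print the GUIDE questions to ask of a candidate principle statement."""
    print(f"Reviewing principle: {principle!r}")
    for criterion, question in GUIDE_CRITERIA.items():
        print(f"  {criterion}: {question}")

# Example: one of the PFE-FE principles from the table that follows
review_principle("Respect multiple ways of knowing.")
```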

Image: Black Lives Matter protest

Crystallising FE Tenets into PFE Principles

The table below shows how mapping the Feminist Evaluation tenets to PFE, and translating them into PFE principles, makes them practical, actionable and usable.

Tenet 1: Evaluation is a political activity, in the sense that the evaluator’s personal experiences, perspectives and characteristics come from, and lead to, a particular political standpoint.
PFE-FE Principle 1: Acknowledge and take into account that evaluation is a political activity; the evaluator’s personal experiences, perspectives, and characteristics come from and lead to a particular political stance.

Tenet 2: Knowledge is culturally and socially influenced.
PFE-FE Principle 2: Contextualize evaluation, because knowledge is culturally, socially and temporally contingent.

Tenet 3: Knowledge is powerful, and serves direct and articulated purposes, as well as indirect and unarticulated purposes.
PFE-FE Principle 3: Generate and use knowledge as a powerful resource that serves an explicit or implicit purpose.

Tenet 4: There are multiple ways of knowing.
PFE-FE Principle 4: Respect multiple ways of knowing.

Tenet 5: Research methods, institutions and practices have been socially constructed.
PFE-FE Principle 5: Be cognizant that research methods, institutions and practices are social constructs.

Tenet 6: Gender inequality is just one way in which social injustice manifests, alongside other forms of social injustice, such as discrimination based on race, class and culture, and gender inequality links up with all three other forms of social injustice.
PFE-FE Principle 6: Frame gender inequities as one manifestation of social injustice; discrimination cuts across race, class, and culture and is inextricably linked to all three.

Tenet 7: Gender discrimination is both systematic and structural.
PFE-FE Principle 7: Examine how discrimination based on gender is systematic and structural.

Tenet 8: Action and advocacy are regarded as appropriate ethical and moral responses from an engaged feminist evaluator.
PFE-FE Principle 8: Act on opportunities to create, advocate and support change, which are considered to be morally and ethically appropriate responses of an engaged feminist evaluator.

(Source: Podems, 2018)

Strengths and constraints

The potential scope for using feminist evaluation is broader than expected. Its strengths include that it is flexible, fluid, dynamic and evolving because it provides a way of thinking about evaluation, rather than a specific or prescriptive framework. Because of this flexibility, it can also be used in combination with other approaches and methods.

A distinguishing feature and a strength of Feminist Evaluation is that it is transparent and explicit about its views on knowledge. It actively seeks to recognise, and give voice to different social, political and cultural contexts, and shows how these give privilege to some ways of knowing over others, by specifically focusing on women and disempowered groups.

Evaluators who use feminist evaluation follow an inclusive approach, which ensures that inputs are obtained from a wide range of stakeholders. This enhances the reliability, validity and trustworthiness of the evaluation, and makes it possible to draw accurate conclusions and make relevant recommendations.

In addition to the misconceptions mentioned in the introduction to this article, it should be noted that apart from the PFE-FE model, limited guidance is available to operationalise the approach.

By Fia van Rensburg

[1] Dictionary.com | Meanings and Definitions of Words at Dictionary.com

[2] Read more on myths about Feminism here: Resources and Opportunities

[3] Feminist Evaluation and Gender Approaches: There’s a Difference? | Journal of MultiDisciplinary Evaluation

[4] Equality Is Not Enough: What the Classroom Has Taught Me About Justice

[5] Social inequality

[6] Structural inequality

[7] Making Feminist Evaluation Practical

[8] Mertens, D.M. and Wilson, A.T. 2012. Program Evaluation Theory and Practice. A Comprehensive Guide. The Guilford Press. New York.

[9] Donna M. Mertens. (2009). Transformative Research and Evaluation. New York: Guilford Press. 402 pages. Reviewed by Jill Anne Cho

[10] Feminist Evaluation and Gender Approaches: There’s a Difference?

[11] Ibid

[12] Inclusive Evaluation: Implications of Transformative Theory for Evaluation

[13] Patton, M.Q. 2011. Developmental Evaluation. Applying Complexity concepts to Enhance Innovation and Use. The Guilford Press. New York.

[14] Podems, 2018

[15] PFE Week: Principles-Focused Evaluation by Michael Quinn Patton

[16] Ibid

Evaluation for Transformational Change – Webinar Summary

Development Works Changemakers joined a webinar on Evaluation for Transformational Change, organised by UNICEF’s Evaluation Office, EVALSDGs and the International Development Evaluation Association (IDEAS).

Image: Title screen for webinar

The presenters were Rob van den Berg and Cristina Magro, the President and Secretary-General of IDEAS. The two are the editors of IDEAS’ recently published book “Evaluation for Transformational Change: Opportunities and Challenges for the Sustainable Development Goals (SDGs)”, on which this webinar was based.

The book presents essays (rather than academic articles) written by “learned agents” in both the Global South and Global North. It combines the perspectives and experiences from a variety of contexts.

The essays discuss ideas of what needs to be done by evaluators and the evaluation practice more broadly to progress from the traditional focus on programmes and projects to an increased emphasis on evaluating how transformational changes for the SDGs can be supported and strengthened. Van den Berg and Magro discussed some of the key essays and concepts presented in the book. They then opened the floor for questions and answers.

Dynamic Evaluation  

One key theme discussed was the need for evaluators to move towards dynamic evaluations for transformational change. Evaluators are encouraged to change their focus from the traditional ‘static’ evaluations of the past which look at what has happened and move towards ‘dynamic’ evaluations which deal with the complexities of transformational change.

Examples include the need to shift focus from programmes and projects to strategies and policies, from micro to macro, from country to global, and from linearity to complexity. The editors suggested several key practices for dynamic evaluations. Evaluations should be done in “quasi-real-time”, meaning not only looking at what has happened in the past and what is happening now, but also considering the potential for the future.

The context in which an intervention takes place should be emphasised: understanding it and making it better. There is also a need to form multidisciplinary evaluation teams, combining an array of expertise and insights beyond evaluation practice in isolation.

Here, they suggest that the involvement of universities in evaluation teams should be promoted. Academics have sociological and community insights and can contribute through background papers and studies. They can also offer more academic and theoretical perspectives which complement the more practical evaluation perspectives.

Image: Bookshelves

Systems Thinking

The editors promote systems thinking and systems evaluations for transformational change. They define systems as “dynamic units that we distinguish and choose to treat as comprised of interrelated components, in such a way that the functioning of the system, that is, the result of the interactions between the components, is bigger than the sum of its components.”

They suggest that by adopting a systems viewpoint, evaluators are in a better position to encourage learning, take on transformation thinking, and assist in identifying and promoting sustainable transformational changes.

To adequately adopt a systems-thinking approach, the editors highlighted four challenges and opportunities for us to consider:

  1. Evaluators firstly need to become ‘fluent’ in systems thinking in order to appropriately apply systems concepts, approaches and methods in their evaluations.
  2. Evaluators need to be increasingly receptive to systems analytics and the information and evidence it produces, especially those considering future-oriented scenarios that could lead to transformational change.
  3. The type of system will dictate the approach required. As such, while there are various approaches, instruments and methods that systems analytics offers, evaluators must use their discretion in identifying those most relevant to their assignment.
  4. Evaluations should provide insights as to whether interventions are able to overcome barriers that they face, enhancing sustainability.

Learning Loops

While learning and feedback loops are often encouraged in evaluation, the editors assert that learning and feedback loops are a key practice for transformation. Evaluators should not only be asking whether the intervention was implemented correctly and was effective, but whether the problem to solve was looked at in the right way to begin with.

By asking more difficult questions, one can better understand what kind of transformational change should actually be accomplished. The editors discussed a triple loop of learning, as depicted in the figure below.

While the first feedback loop asks what we have learned, the second loop looks at whether the initiative is indeed the right initiative for the problem needing amelioration. And the third loop asks if we have looked at the problem in the right way to begin with.

There should be constant feedback loops as we gain a better understanding of the programme, context, actors and so on, and of the actions we take to achieve transformation. We should increasingly look to the future of the programme rather than isolating it in the present. Evaluations need to look beyond the intervention itself and place it within the system it is supposed to address.

Image: Graph from webinar

Sustainability

Systems thinking and the triple learning loop together speak to the need for systems to become more sustainable. Evaluators have often considered sustainability, but this has typically been defined by the long-term programme results.

The editors assert the need for a different approach, emphasising that sustainability should be redefined as “an adaptive and resilient dynamic balance between the social, economic and environmental domains”, where the economic domain no longer exploits the environmental (e.g. climate change) and social (e.g. social inequity) domains.

In order to be adaptive and resilient, one needs preparatory systems. Evaluators can play a role in pointing to these and issues that need to be addressed during the course of an evaluation.

The editors assert that sustainability is a system issue: sustainability is achieved when systems become adaptive and balanced over time in the relationship between the three domains of social, economic and environment. Where the social domain has been disregarded, consequences have included inequity and inequality, and struggles with healthcare, labour conditions, and conflict.

On the other hand, when the environmental domain has been neglected, climate change, a loss of biodiversity and pollution have ensued. The economic domain tends to take precedence due to the common belief that economic growth will resolve societal ailments through the creation of jobs and wealth, and environmental damage through the creation of new technologies.

In practice, the editors encourage evaluators to continuously ask broader questions about an intervention and how it interacts with these three domains. They propose three sustainability questions evaluators should consider when starting an evaluation, namely whether the transformation that the intervention aims for leads to:

  1. More equity, human rights, peace and social sustainability (social domain)
  2. Strengthening of natural resources, habitat and sustainable eco-services (environmental domain)
  3. Economic development that is equitable and strengthens our habitat (economic domain)

The editors encourage evaluators to use these questions as part of their “toolbox” when looking at transformational initiatives. By going through these questions, it becomes clearer where the programme could improve and where additional knowledge and expertise is required.

Image: Graffiti art

Concluding Thoughts

The webinar provided interesting food for thought with regard to contributing to transformational change. Many of the key principles raised are certainly worth aspiring to, including dynamic evaluations, learning and feedback loops, and considering the future of a programme for sustainability.

In order to contribute to transformational change, we need to promote constant learning, encourage participation from key stakeholders, increasingly expand the evaluation team to be informed by sector experts, and continuously look at potential scenarios, risks and hazards. The application of these principles can be harder in practice; often contractors require specific answers to specific questions, and to go beyond the scope can require additional budget and additional time.

While these ambitions may be larger than what is currently feasible for many evaluation contracts, change often only manifests through radical action. As evaluators, our thinking should be constantly stimulated, our learnings continuously shared, and boundaries should be tested.

Bearing such principles in mind and applying them where feasible, even one step at a time, can hopefully slowly but surely advance transformational thinking in programming and evaluation, and therefore contribute to desired transformational change.

By Jenna Joffe

M&E Online Learning Resources

In the thick of COVID-19, the world is practising social distancing and social isolation and, in some circumstances, being placed under lockdown in an effort to flatten the curve and limit the spread of the virus. The crisis has affected everyone in different ways, but for those who have access to technology, it has allowed access to a few of the pleasures of the outside world from home.

This has allowed some to continue working remotely, continue schooling, take up online exercise classes, and keep in touch with friends and family. For some, there’s more opportunity to do those things that were always on the list “if only I had the time.”

One such option is online capacity development. There are numerous virtual learning options, including webinars, short courses, and even degrees.

Image: Online learning

As evaluation specialists, we’d like to suggest several resources for those wanting to brush up on their monitoring and evaluation (M&E) knowledge and skills. These resources can be useful for M&E practitioners, development sector workers, government staff and funders working in the evaluation sector.

Some are free while some have a cost, some earn you a certificate and others not. Some are self-paced and others have deadlines; the options vary. Here are a few options to try out:

Online learning resources – for everyone

Image: Quote from BetterEvaluation

For many of us, taking on an online course may still be out of reach, given limited time, cost implications and increased responsibilities in the home due to lockdowns. BetterEvaluation is an excellent resource hub for anyone and everyone involved in evaluation work.

The platform is free and contains resources useful and applicable to NPOs, funders, government, and external evaluators, at any level of an organisation, from junior to senior.

BetterEvaluation is a one-stop-shop for all evaluation-related queries, insights, and trending topics. The website includes a BetterEvaluation Resource Library, consisting of over 1600 evaluation resources including overviews, guides, examples, tools or toolkits, and discussion papers. The site also includes free training webinars, links to online courses and events, forums, and case studies.

Another great resource offered on the platform is its blogs. These are quick and easy to read, and discuss current trends and topics in evaluation. The whole website is geared towards improving evaluation capacity and practice.

In early April, world-renowned evaluator and CEO of BetterEvaluation, Patricia Rogers, published a blog on the website communicating how they would be responding to COVID-19. This includes working collaboratively to create, share and support the use of knowledge about evaluation, and endeavouring to curate additional content to address the current context, including:

  • Real-time evaluation
  • Evaluation for adaptive management
  • Addressing equity in evaluation
  • Evaluation for accountability and resource allocation
  • Ways of doing evaluation within physical distancing constraints
  • Ways of working effectively online
  • Resources relating to evaluation in the COVID-19 pandemic

Time to start learning

As such, BetterEvaluation has its finger on the pulse of the pandemic and how it will affect the evaluation world, and is committed to delivering up-to-date content for any evaluation practitioner to remain informed and adapt to circumstances.

Keep checking the website in the coming weeks to see when this content becomes available.

By Jenna Joffe

Programme Theory: Theories of Change and Theories of Action

“Theory of Change” (ToC) has certainly become a popular term in the social development sector among funders, nonprofits, government departments, and others, who increasingly want and need a ToC as an integral part of their programming and interventions for beneficiaries.

As experts in evaluation, we at Development Works Changemakers are often requested by our clients to either assess the rigour of an existing ToC or develop one from scratch. Assessing an existing ToC helps to ensure that a programme is designed to reasonably achieve its intended results (done within the context of a design evaluation).

Underlying principles of a ToC

Essentially, a ToC is a road map: a visual representation of what your programme does, what it is supposed to achieve, and how it will achieve this. The ToC maps the activities you are running, your intervention, and the results you want to get out of it.

The purpose is to get all key stakeholders of a programme to understand what the programme does – especially if it’s a complicated programme and there are many stakeholders.

A ToC is a helpful exercise. It’s the “design” of the programme. It asks important questions such as – is the programme designed to achieve its intended results?

By mapping out the programme, you are able to identify what activities you do and the type of people you reach. You are then able to scrutinize the programme and see if it’s plausible and that the activities will lead to the desired outcomes.

Image: Description of a ToC

A ToC is testable. One benefit of this is that you can develop indicators which serve as measurements of how your project is doing. It allows a group to assess what the end goal looks like on paper, whether the programme is plausible, and it introduces an element of accountability to funders, stakeholders and beneficiaries.

In addition, a ToC is not a rigid document. Instead, it is a working document that is flexible and should constantly be referred to as you navigate through the programme. This allows the organization to reconsider important assumptions as the programme advances.

When is a ToC developed?

A ToC is helpful in terms of accountability for both funders and stakeholders. It fills in the missing steps, showing how reasonably a programme can be expected to achieve its goals. Breaking down a programme step by step helps make sense of it for all involved. This also helps with funding.

A ToC is also useful for rechecking assumptions. When things are going wrong, you can go back and look at what is missing or needs to be tweaked in the programme design. It’s helpful to know all of your outcomes (long-, medium- and short-term), as this helps to identify achievement of short-term goals on the way to the long-term goals.

Developing a new ToC is often done for clients who have been running programmes for years but do not have a clearly articulated ToC. Other clients request a ToC when on the cusp of launching new programmes.

If a ToC is developed early in a programme’s lifespan (i.e. at the design or pilot stage), it allows one to identify potential risks and curveballs early on, and either put actions in place to mitigate these, or even change the plan to ensure that these barriers are not faced.

How are ToCs received?

Image: Quote on ToC

There is a mixed reception of ToCs in the industry. Those who have one understand it and see its value for tracking progress towards a single goal. The value lies in making sure that the activities reach particular milestones along the way.

On the other side of the coin, it can be daunting for those who are unfamiliar with the theory. There are many different names for a ToC, and it has been adopted and represented differently among various organizations.

Ultimately, it depends largely on the capacity of the organization to develop one and how they need to communicate with their stakeholders (marketing vs. strategic purposes).

The jargon and different wording for a ToC can be confusing for clients and sometimes even evaluators themselves. This is largely because ToC is used interchangeably with, or represented as, a programme theory, log frame, logical framework, logic model, results chain, impact theory and even more!

Adding to the confusion is that there is no single right way to develop or articulate a ToC. If you simply google the term “theory of change”, you will be faced with an array of graphic representations, as presented below.

Image: Different examples of a ToC

This confusion can also make people look at it with scepticism. Some get nervous about it being a form of testing their work and their services because there’s something a little bit more systematic in place (which can cause resistance).

Overall, practitioners, implementers and stakeholders are increasingly seeing the value in it and coming on board with using and creating ToCs.

Deciding on the format

The format depends largely on the unique needs and requirements of the organisation or their funders, but is also very much dependent on the evaluator/individual developing the ToC and what practices they studied or typically employ.

For example, a ToC and a Theory of Action (ToA) go hand-in-hand. Essentially, a ToA is how you operationalize your ToC. This is one practice that could be followed.

The ToC refers to the broader underlying theory, framed with “if-then” statements, which feeds into your ToA.

ToCs can look very different, from box-and-arrow diagrams to infographics and tabular formats.

Different ToC representations

Let’s look at some of the common ways organisations represent their ToC.

Tabular format

Below is a tabular format known as the “log frame” or “logical framework”. You’ll see here that the process flows from the bottom to the top, showing:

  1. The activities that are undertaken;
  2. What the immediate outputs or deliverables are;
  3. The expected change (i.e. improvement in knowledge, skills, attitudes, and/or behaviours); and finally,
  4. The ultimate goal the programme is contributing to.

This approach offers the benefits of including the indicators (i.e. the way that one will measure whether activities took place or outputs and outcomes were achieved) and means of verification (i.e. the sources one will use to measure the indicators).

A drawback is that this format is very structured and restrictive. It assumes that change happens in a linear fashion, which is not always the case; sometimes there are feedback loops. It also shows only one overarching outcome, while there may be various shorter-term, medium-term and longer-term outcomes that have to occur for goals to be reached.
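As a minimal sketch of the structure just described, the snippet below models a linear log frame in Python, with each level carrying its indicators and means of verification. The example levels and indicators are hypothetical, loosely echoing the Cash Plus Care case study above, and are not drawn from any actual log frame.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Indicator:
    description: str            # how achievement will be measured
    means_of_verification: str  # the source used to measure the indicator

@dataclass
class LogFrameLevel:
    name: str       # e.g. "Activities", "Outputs", "Outcomes", "Goal"
    statement: str
    indicators: List[Indicator] = field(default_factory=list)

# A linear results chain, read from the bottom (activities) to the top (goal).
# All statements and indicators here are hypothetical illustrations.
log_frame = [
    LogFrameLevel("Activities", "Run 12 weekly empowerment sessions",
                  [Indicator("Number of sessions held", "Facilitator attendance registers")]),
    LogFrameLevel("Outputs", "Young women complete the empowerment programme",
                  [Indicator("Number of graduates", "Programme completion records")]),
    LogFrameLevel("Outcomes", "Improved sexual and reproductive health knowledge",
                  [Indicator("% change in knowledge scores", "Pre- and post-questionnaires")]),
    LogFrameLevel("Goal", "Reduced HIV incidence among participants",
                  [Indicator("HIV incidence rate", "Health facility data")]),
]

for level in log_frame:
    print(f"{level.name}: {level.statement}")
    for ind in level.indicators:
        print(f"  indicator: {ind.description} (verified via {ind.means_of_verification})")
```

Note how the structure itself enforces the linearity flagged as a drawback above: each level leads only to the next.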

Image: ToC showing goals

Image credit: The Guardian

Logic model

Another format is the “logic model”, which is commonly used among nonprofits.

It’s important to note that the process has a more natural flow, but is still tabular. This makes the model useful for developing indicators separately.

Image: Example of a ToC

Image credit: Student Affairs Assessment

Below is a more flexible logic model, which is favoured by Development Works Changemakers when developing ToCs for clients. It shows the important inputs, activities, outputs and various outcomes and impacts. The main benefit is that this model can show that change is not necessarily linear. The box-and-arrow diagrams show interrelationships, feedback loops, how one activity may lead to only one outcome rather than all, how one outcome must be achieved before another, and so on.

One addition that Development Works Changemakers makes is to include the assumptions of the programme throughout the diagram. These are both assumptions that support the programme achieving its intent (enablers) and those that prevent this (barriers).

Assumptions are extremely important to consider, as they may have a significant impact on the design. For example, important assumptions of offering an afterschool programme are that:

  1. Learners are provided with transport home after school hours;
  2. It is safe to be on school property after school hours, etc.

When such assumptions do not hold, they can often explain why a programme is not working or not achieving its outputs or outcomes.
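To illustrate the non-linear, box-and-arrow style of ToC in code, here is a minimal sketch that represents ToC links as a directed graph, with enabler/barrier assumptions attached to each link. The nodes, links and assumptions are hypothetical, extending the afterschool example above.

```python
from collections import defaultdict

# Each link is (source, target, assumptions-that-must-hold).
# All content here is a hypothetical illustration of the afterschool example.
toc_links = [
    ("Afterschool tutoring sessions", "Improved learner attendance",
     ["Learners are provided with transport home after school hours"]),
    ("Afterschool tutoring sessions", "Improved subject knowledge",
     ["It is safe to be on school property after school hours"]),
    ("Improved learner attendance", "Improved subject knowledge", []),
    ("Improved subject knowledge", "Better Grade 12 pass rates", []),
]

graph = defaultdict(list)
for source, target, assumptions in toc_links:
    graph[source].append((target, assumptions))

def trace(node: str, depth: int = 0) -> None:
    """Walk the chain from one element, surfacing the assumptions along each link."""
    for target, assumptions in graph.get(node, []):
        print("  " * depth + f"{node} -> {target}")
        for a in assumptions:
            print("  " * (depth + 1) + f"[assumption] {a}")
        trace(target, depth + 1)

trace("Afterschool tutoring sessions")
```

Unlike the tabular log frame, a single activity here can feed multiple outcomes, and a failed assumption can be traced to exactly the links it puts at risk.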

Image: Example of a ToC

Image credit: ICAI

Here is another example of a simple ToC and an effective way of presenting the results chain:

Image: A simple and effective ToC

Image credit: Better Evaluation

Points to consider with a ToC

Constructing a ToC mostly depends on how the organization uses a ToC, and whether they use it at all. Certain formats assume that a programme follows a linear pathway, which isn’t always the case, especially for more complex or interlinked programmes whose outcomes may require feedback loops.

It’s therefore important to construct your ToC in a non-restrictive manner. You need to understand the complexities of what you are trying to achieve.

Another important factor is how the sector accepts the ToC. It should be used as a learning tool and a way to constantly improve your programme and meet your objectives and outcomes.

Conclusion

The main purpose of a ToC is to put everyone on the same page by mapping out the design in a sensible and reasonable way. All stakeholders and funders can better understand goals; it helps with developing indicators and ensuring accountability; and should be used as a working document for programme improvement.

To find out more about how we can help your organization plan a ToC and create positive change in a powerful way, contact Lindy Briginshaw (lindy@dwchangemakers.com).

Written by Jenna Joffe

The Value of Appreciative Inquiry in the Monitoring & Evaluation, Reporting and Learning Space

The evaluation space can be a tricky one to navigate, especially considering that making evidence-based judgements about the merit or worth of programmes, and about what works and what does not, is an integral part of evaluation.

Development Works Changemakers (DWC) has been providing Monitoring & Evaluation (M&E) support and capacity development to a non-profit organisation working in the basic education space since 2018. This organisation wanted to expand its M&E system to also incorporate reporting and learning.

We recently introduced Appreciative Inquiry (AI) to assist them to build on the positive core of their existing reporting practice, and to track and magnify that into an improving reporting practice in 2020, as part of moving from a traditional monitoring and evaluation (M&E) system to a monitoring, evaluation, reporting and learning (MERL) system.

Understanding Appreciative Inquiry

Appreciative Inquiry is a useful and interesting approach to create positive energy regarding reporting, by focusing on what works. The methodology focuses on what works best, but also identifies areas that need attention, or could be improved.

It can’t be used in every circumstance, but it is a great tool that can be very useful in certain situations. DWC has used AI to activate organisational change processes related to MERL (as in the example provided above); to supplement Theory of Change (ToC) workshops; and to elicit data from different perspectives during evaluation processes.

What is Appreciative Inquiry?

AI is an action-research methodology that enables organisations to co-construct their desired future, and which focuses on the positive qualities of an organization. These positive qualities are leveraged to enhance the organization. AI is founded on 8 key principles, namely:

  1. Constructionist – Understanding a reality that is socially constructed through language and conversations
  2. Simultaneity – Inquiries create an intervention and initiate change
  3. Poetic – Organizations are an endless source of study and learning which constantly shapes the world as we know it
  4. Anticipatory – Using a hopeful image to inspire action
  5. Positive – Believing that positive questions lead to positive change
  6. Wholeness – Bringing out the best in people and organizations to stimulate creativity and build collective capacity
  7. Enactment – Starting the process of positive change with self as a living model of the future
  8. Free choice – Believing that free choice liberates power and brings about enhanced results

Source of principles: Sideways Thoughts

Using AI in evaluations

AI was developed as an organizational change methodology but has been adapted for use in evaluations. In the evaluation community, AI is at best not widely accepted, and is sometimes even frowned upon. However, it offers a different approach that adds unique value.

For the past few decades, evaluators have focused on the judgement aspect of evaluation: what distinguishes evaluation from other applied social research is that it must make a judgement on the merit or worth of programmes and projects.

Each case is unique, and AI is not suitable for all evaluations. Care must be taken over the nature of the task at hand and which other methodologies are being used in conjunction with it.

It should also be noted that AI is not an evaluation approach and does not feature as an evaluation theory; it is a tool that can be used for data collection and process facilitation.

When does Appreciative Inquiry work?

As mentioned above, AI can work where energy is required to move processes forward, and it can also be used in evaluations. AI works well where a project or programme is not going well: in such situations, project or programme stakeholders may become defensive when evaluators are appointed, as they anticipate negative judgement.

This defensiveness may impede the openness of stakeholders, which makes it difficult to learn from failures or challenges. AI provides a non-threatening environment in which stakeholders can discuss a project without fear of judgement. By starting with the identification of what works, a safe space is created in which to also discuss what does not work so well. The idea that our questions have the power to shape reality may be a frightening thought, but it is one worth exploring.

Understanding the approach

The underlying philosophy for AI is that what we focus our attention on in the social world will grow and develop. If we focus on the positive, the positive will grow and multiply, but if we focus on the negative, that will thrive instead.

This means that if we follow a problem-centred approach, we get stuck in the misfortune of the problem. The more we try to fix it, the more it grows.

Well, let’s be fair – sometimes problem-solving works, but how many problems did development initiatives (mostly based on a problem or deficit analysis) manage to solve over the past 50 or more years?

A common objection is that you cannot just look at the positives: what about the negatives? In some ways this concern is valid; in others, it highlights how AI can be misunderstood.

AI does look at the negatives, but in a different way, so that they do not dominate the conversation. Challenges are still lifted out, but in a more constructive way that does not pull the energy down.

Steps in the AI process

In the monitoring and evaluation space, AI can be used as a fully-fledged process, or elements of it can be used. The AI process is typically described in terms of the 4-D, 5-D or 5-I models. These models can also be linked to a planning process that echoes elements of the traditional SWOT process (strengths, weaknesses, opportunities and threats): the SOAR process takes strengths and opportunities and works with them to develop aspirations and articulate desired results.

Evidence that supports AI

Through a remarkable body of research, neuroscience has established that we affect people either positively or negatively by the way in which we engage with them and the way they perceive us (also as evaluators).

Prominent neuroscientist Evan Gordon (2000) reminds us that the “avoid danger and maximize reward” principle is an overarching organising principle of the brain, and translates into the approach-avoid response.

When our brain tags a stimulus as “good,” we engage in the stimulus (approach), and when our brain tags a stimulus as “bad,” we will disengage from it (avoid). Translated into the evaluation space, this means that if our evaluation processes are perceived as threatening by stakeholders, they may well disengage.

We also know that when people are “seen, heard and loved”, the associated surge in brain chemicals enables them to think better and more creatively (connecting behaviour, or approach). Conversely, when people feel criticised, judged and dismissed, their brains literally shut down as they go into flight mode (avoiding behaviour, or disengagement).

The power of AI

There is a wealth of evidence showing the power of our words. When athletes use positive imagery and self-talk to tap into their potential and perform at their best, we think it is extraordinary. Why, then, do we hesitate to use the same approach to propel our projects and organisations to perform at their best?

Can we as evaluators find a way of using generative questions to tap into what works, so that we can learn from it and amplify it?

The power of questions is aptly described by Browne (2008), who pointed out that every question has a direction, and that depending on this direction it carries either generative or destructive energy.

AI is interested in generative questions – those that “build a bridge” or “turn on a light”. The rationale for AI is that if we pose provocative questions that discover the positive core of a project or programme, we can multiply and magnify what works.

By doing this tracking and fanning, we focus our energy on what works, and this creates the energy for the programme to grow in that positive direction.

Final Thoughts

Essentially, AI holds a great deal of potential, especially when used appropriately: when you identify what works and amplify it, great changes can follow.

AI is underpinned by a relational and conversational approach to human systems. This approach pays attention to the patterns in the system and the expressive relationship between the elements of the system.

Human systems are living systems, and in these systems, patterns of belief, communication, action and reaction, sense-making and emotion are important – these are the things that “give life” to the system.

At DWC, we specialize in a variety of methodologies and creative approaches. We will adjust and customise each approach depending on each organization’s specific needs, expectations and other contextual factors.  To find out more about how we can help your organization to measure, evaluate, shape and create positive change in a powerful way, contact Lindy Briginshaw (lindy@dwchangemakers.com).

By Fia van Rensburg

mobile survey

Using mobile survey technology for data collection

By | Evaluation, Research

Development Works Changemakers always strives to innovate and optimise the use of technology, especially in research processes and evaluation studies. One way of improving efficiency and data quality, whilst maximising time in the field, is to use mobile survey technology.

Paper-based data collection, or paper-and-pencil interviewing (PAPI), has been the standard method for decades. However, errors are frequent; printing, transport and storage costs are high; and the need to capture data separately increases the chance of data entry errors.

The development of electronic methods

Electronic methods of data collection have been developed to merge the processes of data collection and data entry. In 2017, more than 90 per cent of the population in Sub-Saharan Africa was covered by 2G networks, and more advanced networks are now beginning to take hold.

South Africa leads the continent in mobile penetration, with 153 mobile cellular subscriptions per 100 people. Use of mobile phones is widespread even in remote areas of rural South Africa.

Computer-Assisted Personal Interviewing (CAPI)

One example is Computer-Assisted Personal Interviewing (CAPI), which our team used as the data collection method for an evaluation of an informal settlements upgrading programme.

The evaluation aimed to assess the outcomes of the upgrading of informal settlements: the extent to which the programme had enhanced security of tenure, improved healthy and secure living environments, and reduced social and economic exclusion, with the aim of identifying strengths, challenges and lessons for future strategic planning.

A mixed-method summative evaluation design using the Lot Quality Assurance Sampling (LQAS) methodology was used to assess the outcomes of the programme in designated areas of the Western Cape.
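For readers unfamiliar with LQAS, its core is a simple binomial decision rule: a small sample is drawn from each ‘lot’ (for example, a settlement or supervision area), and the lot is classified as reaching a coverage target if enough respondents report the outcome. The sketch below shows roughly how such a rule can be derived; the sample size, thresholds and risk levels are illustrative, not the parameters used in this evaluation.

```python
# A rough sketch of deriving an LQAS decision rule from a binomial model.
# The thresholds and 10% risk levels are illustrative only.
from scipy.stats import binom

def lqas_decision_rule(n, p_upper, p_lower, max_risk=0.10):
    """Smallest d such that classifying a lot as 'reaching the target'
    when at least d of n respondents report the outcome keeps both
    misclassification risks at or below max_risk."""
    for d in range(n + 1):
        alpha = binom.cdf(d - 1, n, p_upper)       # good lot judged bad
        beta = 1.0 - binom.cdf(d - 1, n, p_lower)  # bad lot judged good
        if alpha <= max_risk and beta <= max_risk:
            return d, alpha, beta
    return None  # no rule satisfies both risks at this sample size

# Classic LQAS example: 19 interviews per lot, 80% vs 50% coverage thresholds.
print(lqas_decision_rule(19, 0.80, 0.50))  # -> (13, ~0.07, ~0.08)
```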

The methods of data collection employed were key informant interviews, focus group discussions and a beneficiary survey.

Our researchers employed a CAPI software application loaded onto a mobile phone. The software gives the fieldworker access to the survey and takes them through it step by step. Data is saved on the device and uploaded when the phone is next within network range.

The process and application

With thorough training, the fieldwork research team found the whole process of creating and uploading the survey to be very user-friendly. The application takes the fieldworker through the survey question by question.

Fieldworkers select responses across a range of question types, including multiple-choice questions, qualitative or open-ended questions, and tick-box questions.

Additionally, the application has built-in prompts that check the validity of the answers captured by fieldworkers, requiring them to answer each question and to give a valid response. This prevents questions from being skipped; a minimal sketch of this kind of validation logic is shown below.
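To illustrate, here is a minimal, hypothetical sketch of that validation logic in Python. Real CAPI platforms configure these checks in their form designers rather than in hand-written code, and the questions below are invented for the example.

```python
# A minimal, hypothetical sketch of CAPI-style answer validation:
# the fieldworker cannot move on until a valid answer is captured,
# so questions cannot be skipped.

def ask(prompt, valid_options=None):
    """Repeat the prompt until a non-empty, valid answer is given."""
    while True:
        answer = input(prompt).strip()
        if not answer:
            print("An answer is required before continuing.")  # no skipping
        elif valid_options is not None and answer not in valid_options:
            print("Please enter one of: " + ", ".join(sorted(valid_options)))
        else:
            return answer

# Step-by-step flow: each question must be answered before the next appears.
record = {}
record["consent"] = ask("Consent given? (yes/no): ", {"yes", "no"})
record["dwelling"] = ask("Dwelling type (formal/informal/other): ",
                         {"formal", "informal", "other"})
record["comments"] = ask("Any comments on the programme? ")
```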

Once in the field, the research team were able to use the phones with ease. Fieldworker Monalisa Guzana shared, “Using mobile survey technology was tricky when we first began, in terms of learning the questions on the phone and how they were formatted, charging the phones every night – we had to get used to these aspects not present with paper-based surveys. Once we had systemised our way of working, the application and tool made our work simple and quick.”

After the fieldworker had completed a survey, the results were uploaded from the phone once within network range.  Fieldworker Tarryn had this to say about capturing data electronically for the first time, “CAPI is the future… Conducting surveys/questionnaires via cellphone simplifies the process of capturing the data. No paper, no fuss. It is convenient and easy to use. It was such a delight to use in the field.”

The benefits

A key feature of mobile survey technology is that the system provides a fieldwork management spreadsheet showing the number of surveys captured by each agent, when they were uploaded and how long the survey implementation took.

This information is vital for an evaluation. It streamlines fieldworker and survey management, and allows project managers to see where each fieldworker is reporting from and how long each survey takes to complete.
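As a rough illustration of how such a summary could be reproduced from raw submission records, here is a short Python/pandas sketch. The column names and sample values are assumptions (actual platforms export their own formats), and duration is approximated here as the time between start and upload.

```python
# A rough, illustrative reconstruction of a fieldwork-management summary
# from raw submission records. Column names and values are assumed;
# duration is approximated as the time between start and upload.
import pandas as pd

submissions = pd.DataFrame({
    "fieldworker": ["Monalisa", "Monalisa", "Tarryn"],
    "started": pd.to_datetime(["2019-05-06 09:00", "2019-05-06 10:10",
                               "2019-05-06 09:15"]),
    "uploaded": pd.to_datetime(["2019-05-06 09:45", "2019-05-06 10:55",
                                "2019-05-06 10:05"]),
})
submissions["duration_min"] = (
    (submissions["uploaded"] - submissions["started"]).dt.total_seconds() / 60
)

summary = submissions.groupby("fieldworker").agg(
    surveys=("uploaded", "count"),
    last_upload=("uploaded", "max"),
    avg_duration_min=("duration_min", "mean"),
)
print(summary)
```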

Researcher Paul shared his insight, “CAPI is an innovative tool that not only improves efficiency in the research process but also secures data. Using the software also requires adequate training, an aspect, which should not be undermined.

Nevertheless, with appropriate skill and technical know-how, designing the surveys within the software online and actual execution of the survey and analysis of results thereof will become an exceptionally manageable time and cost-saving.”

data collection

Tips for using CAPI/mobile survey technology:

After using the technology ourselves, we’ve put together a list of tips to help with the effective use of CAPI technology.

1. Maximise the input and support around the design and functionality of your survey

Learning how to design and create your CAPI survey can be quite daunting at first. Ensure that sufficient time and budget are allocated for this crucial stage.

2. Train your fieldwork staff thoroughly

It is imperative to provide thorough training for fieldworkers who will be collecting data. When fieldworkers are comfortable using a new form of technology before embarking on data collection, it will create fewer problems once they are in the field. Run through your phone survey in the training!

3. Pilot your survey, analyse the results and give yourself the necessary time to make any adjustments needed

Piloting your tools before entering the field is an essential component of any research process. When using a new form of data collection, it is advisable to give yourself enough time to analyse the results and make necessary adjustments. Practice, practice, pilot!

4. Regularly check your data as it comes in

The CAPI web console allows you to access and manage data in real-time. This is particularly helpful as it allows you to monitor data as it comes through, keep track of progress and identify any problems early on. Project managers can keep track of fieldworkers and surveys online, in real-time! (A hypothetical example of such a check is sketched after these tips.)

5. Regularly check up on your data bundles to ensure that your surveys are captured

A pay-as-you-go approach, where credit is purchased for the phones as needed, helps you monitor data usage and ensure that surveys are being captured and uploaded reliably.
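As a hypothetical example of the kind of early check mentioned in tip 4, the short sketch below flags interviews completed implausibly fast, so a project manager can follow up while the team is still in the field. The threshold and records are invented for illustration.

```python
# A hypothetical early-warning check on incoming survey data:
# flag interviews completed implausibly fast for follow-up.
MIN_PLAUSIBLE_MINUTES = 10  # illustrative; set according to survey length

def flag_fast_interviews(records, threshold=MIN_PLAUSIBLE_MINUTES):
    """Return records whose reported duration falls below the threshold."""
    return [r for r in records if r["duration_min"] < threshold]

incoming = [
    {"id": "S001", "fieldworker": "A", "duration_min": 24.0},
    {"id": "S002", "fieldworker": "B", "duration_min": 4.5},  # suspicious
]
for record in flag_fast_interviews(incoming):
    print(f"Check survey {record['id']} by fieldworker {record['fieldworker']}")
```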

Our DWC portfolio highlights our years of experience in data collection and fieldwork. If you need effective data collection for any project, programme, research study or assignment, please do contact Lindy Briginshaw at info@dwchangemakers.com.

In conclusion, for a detailed comparison between CAPI and PAPI, visit SurveyCTO’s website. SurveyCTO is a highly reliable mobile data collection platform, and our team of researchers and evaluators has used it often when working in offline settings.

community politics and tips to overcome

10 tips for moving beyond community politics

By | Ethics, Evaluation

Research fieldwork can be daunting, and often impossible, when the community is not on your side. As an evaluator conducting research in the field, it is essential to strive to overcome this obstacle by acknowledging the importance of community buy-in. This helps minimise the chance of community politics derailing your work.

To have the community with you – and not against you – is vital and cannot be underestimated. Politics must be negotiated carefully to avoid community objections, apathy or resistance.

Lindy Briginshaw, CEO and Founder of Development Works Changemakers, explains that without community buy-in, your research study or evaluation can be challenged or even derailed. 

“Don’t anticipate obvious success in undertaking your community research or evaluation study. The strength of your work depends largely on partnerships developed between researcher, evaluator and community, as well as cooperation, negotiation and commitment to the research or evaluation project,” she says.

Here are 10 top tips for overcoming challenging community politics when conducting research, completing surveys, interviewing community members and gathering data for an evaluation in a particular community.

1. Share responsibilities with your client from day one

Bring your client on board as much as possible. Your client may be well connected to a community and thus able to assist in identifying community stakeholders, or influencers, who can legitimise the research process. Such stakeholders include local government officials with political office, as well as community leaders, activists and mobilisers. Once the community leaders have been identified, you will need a point of entry into the target communities.

You must be given adequate channels of access and know the protocols that need to be followed. This can be achieved by obtaining a letter from the relevant officials in positions of power. In this way, community politics can be limited, or even avoided, by engaging respectfully and communicating extensively with the respective community stakeholders to ensure buy-in and access.

2. Conduct a situational assessment with your client

Get to know the community landscape and the social dynamics at play, and share your experience of this at briefing meetings. Doing so will provide valuable feedback on how your client’s intervention has been received up to the point of evaluation, and a preliminary assessment of knowledge, attitudes and perceptions of the intervention. In turn, you can identify areas of sensitivity to avoid when approaching the community, and refine your methods where necessary.

3. Be up-to-speed on community current affairs

Identify a ‘community champion’ – someone who is a leader or works in the community and whom you can contact regularly for information and guidance, both before reaching out to the community and throughout the intervention.

Champions are often your first point of contact as a researcher and evaluator. Usually, they have the community intelligence you need to assist you in your work. Open communication and a good relationship with your champion/s are key and this will support your understanding of the community, as well as your safety and security in the field.

4. Set up meetings with the community leaders

Community leaders are elected or appointed representatives of their community and feel responsible for what happens within their sphere of influence. It is therefore essential to identify yourself and your purpose in the area.

Communicate respectfully with the community and its leaders about your research and evaluation objectives and who your client is. Failing to acknowledge community leaders can pose a serious challenge, limit access, and may even derail your efforts entirely.

5. Follow the proper channels of community awareness to facilitate buy-in

Once you’ve developed and nurtured a relationship with community leaders, they become an important asset for conducting your research in a particular community. They are instrumental in facilitating buy-in because of their position of influence. 

The leaders will make the community aware of the intended research, the objectives of the evaluation study and the benefits to the community. Buy-in from the rest of the community is then more likely to be achieved. The community will be aware of your presence and, most importantly, you can be secure in knowing that the proper channels have been followed.

6. Step back and take an objective standpoint

After the politics of access have been addressed, it is important to note that broader political issues should not be addressed by you. You should not represent any affiliation, political party, view or ideology. Rather, you should approach the community as an objective outsider representing the research consultancy or evaluation agency contracted by your client.

You should emphasise that your role is limited to data collection, research and evaluation purposes, and that you have no authority over, and pass no judgement on, the views expressed by community members.

7. Treat community members with the utmost respect

Before you begin, always obtain consent for participation from community members through the signing of a consent form. Community members should be treated with dignity and respect and should never be forced to participate in your research.

8. Be aware of political and community sensitivities

It’s essential to be aware of sensitive issues within communities and in the country at large. Knowing these can guide how you dress, how you approach and talk to people, and even how you conduct your research.

This becomes even more important if your research explores sensitive socio-political issues. Having such contextual awareness can mitigate the risk of frustrating community participants and it allows you to be politically sensitive.

9. Know when you can push the limits 

If you find that a survey participant is uncomfortable, it is important to be sensitive to this. Your task is not to cause turmoil or further damage to a situation. In some situations, you are advised to release from the interview a participant who does not want to proceed. It is then best practice to refer the participant to a person, nonprofit support group or counsellor who can support them.

10. Show your appreciation

Once you have completed your research, it is important to give thanks and show appreciation for the community’s time and contribution to your work. You never know when research will need to be conducted in the same community again. 

Leaving people with a smile and a feeling that their inputs are valued is crucial; this respect shows appreciation for the contributions of community members.

Development Works Changemakers conducts independent evaluations and assessments globally, covering projects, programmes, development initiatives and communication campaigns.

We strive to add value to public and private sector partners, funders and development organisations by providing accurate, insightful and cost-effective solutions to enhance programme performance. To find out more, contact Lindy Briginshaw (CEO) or Susannah Clarke-von Witt (Research & Evaluation Director) by emailing lindy@dwchangemakers.com or susannah@dwchangemakers.com.

gold standard in evaluation

A new ‘Gold Standard’ in evaluation design

By | Evaluation, Research

The term ‘gold standard’ is contentious when speaking about evaluation designs. Often, it refers to randomised controlled trials (RCTs): evaluation designs that replicate the experimental designs of the physical and biological sciences and help us make causal claims. Some claim that this is the strongest and most robust evaluation design.

The new gold standard

However, ask evaluators who provide evaluation services in the development sector what the best evaluation design is for a specific intervention, and off the bat you are likely to get the ambiguous response “it depends”, because interventions don’t work like neat and tidy laboratory experiments.

Interventions should be bold and innovative, conceptualised to perform a specific function. Although this is great for social and human development, it can be challenging for evaluators.

A landscape of interventions

In a landscape of interventions of all shapes and sizes, evaluators are presented with the task of being an educator, advocate, technician, and sometimes even a magician. At the core of an evaluator’s response should be the question: ‘What is the purpose of this evaluation?’

More often than not, commissioners of evaluations are interested in outcomes and impact, but various factors can make an RCT unfeasible, including how the programme was designed at conceptualisation and the point at which the evaluation is commissioned. It is at this point that evaluators need to do the best they can with what they have.

Consider the analogy of travelling from point A to point B: the best possible vehicle may not be the Rolls-Royce envisioned, but rather a dirt bike suited to the terrain.

Some work still needs to be done to shake off the stigma attached to not producing an evaluation design that follows the often elusive ‘gold standard’, in the hope that the new ‘gold standard’ of evaluation will be whatever design is fit for purpose.

evaluation techniques