
Case study: TIMS M&E Systems Strengthening Assessment

By Case Study, Evaluation

The Development Works Changemakers’ portfolio is expansive, with several case studies in a variety of development niches. Between January and March 2020, we worked for Wits Health Consortium (WHC) to provide an M&E systems review.

Funded by the Global Fund to Fight AIDS, Tuberculosis, and Malaria, we conducted a Monitoring and Evaluation (M&E) systems strengthening assessment on TB in the mining sector across Southern Africa.

Find out more about the project outline, our approach, the deliverables, and the value of the work below.

Project Outline

The Global Fund grant “TB in Mining Sector in Southern Africa” (TIMS) is a ten-country regional grant supporting a programme which aims to help reduce the burden of Tuberculosis (TB) in the mining sector in Southern Africa. The grant is being implemented in Botswana, Eswatini, Lesotho, Malawi, Mozambique, Namibia, South Africa, Tanzania, Zambia, and Zimbabwe, in selected districts with a high burden of TB, large mining populations, and/or high levels of artisanal and small-scale mining.

The Wits Health Consortium (WHC) is the Principal Recipient (PR) of the grant, while implementation at the country level is undertaken through two Sub-Recipients (SRs), with each SR managing five countries. These SRs work closely with the National TB Programmes (NTPs) in each of the countries in which they work.

The TIMS programme gathers data from a range of healthcare facilities and occupational health centres in participating countries, specifically data relating to TB prevalence among mining Key Populations (KPs). WHC relies on national health system workers (e.g. nurses, District officials) to collect, capture and report on these indicators to the PR, with the SRs acting as a conduit for this data and playing a key monitoring, verification and support role.

The PR reports to the Global Fund on the results, which are aggregated across all 10 countries. To help manage this data flow and reporting, WHC established data management systems and processes intended to help it consolidate all programme data for submission as part of its semi-annual reports to the Global Fund. WHC, however, experienced a number of issues with these data management systems and processes, and it thus contracted DWC, as M&E experts, to conduct a technical review of its systems, processes and tools and to advise on appropriate measures to ensure these challenges are overcome.

Project Deliverables

Project deliverables were:

  • Reviewing the systems, processes and tools at the PR level, SR level, and NTP level to identify areas of weakness and make recommendations to improve the systems. An M&E System Review Report and recommendations were produced, as well as two country mission reports after in-country assessment visits. A customised M&E good practice checklist was also developed and provided.
  • Reviewing the progress update report (PUDR) and supporting documents prior to submission to the Global Fund. A report was produced on the review of all data submitted to the GF.
  • Conducting training of M&E staff at PR and SR levels on data quality assurance aligned to PUDR reporting and GF guidelines. A training workshop was conducted, and a workshop report was produced.


Our Approach

This was a rapid technical review of the TIMS M&E systems, conducted over a 25-day period in early 2020. The project took place in four phases:

Phase I: Rapid Review of Systems, Processes and Tools at PR Level

The following activities and methods were used in Phase I:

  • An inception meeting was held with the M&E team from the WHC, which led to the development of an inception report and work plan.
  • A document review was then conducted of all relevant documentation on the TIMS M&E systems and tools. In excess of 20 documents were reviewed.
  • An M&E checklist, based on criteria for a good M&E system, was drawn up by the evaluation team. This checklist was used as a guiding reference and standard for reviewing the systems in Phases I and II. It also informed the development of all interview tools.
  • A physical inspection of the M&E system at a PR level was conducted.
  • The initial meetings, document review and M&E checklist were used to generate structured interview/survey tools, which were used for interviews with key staff and other role-players at a PR level. These interviews were conducted via Skype/WhatsApp in the week of the 20th of January and focussed on key questions around how the PR is experiencing the system, challenges, system gaps and failures, user-friendliness and good practices. In all, six key individuals were interviewed.

Phase II: Rapid Review of Systems, Processes and Tools at SR and NTP Level

Following the review at PR level, DWC undertook two in-country visits to review the systems, processes and tools at SR and National TB Programme (NTP) level. Country visits were conducted in South Africa and Botswana from 27-29 January 2020.

  • DWC undertook a review of key documents at SR and NTP level (e.g. tools, SOPs, quarterly reports) in cases where these were not included in the PR system review. A total of 11 additional documents were reviewed in this phase, including SOPs, and M&E tools and protocols.
  • DWC then undertook a physical inspection of the M&E systems, processes and tools in two of the grant countries. Based on discussions with the PR, South Africa and Botswana were selected, as these were the most cost-effective options and the countries where the two SRs are based. Of the two days spent in each country, one day was spent with the SR learning about the systems and issues faced at the management site, and one day was planned to be spent in the field learning how data is gathered and what the ground-level dynamics and constraints of data collection are.
  • A structured interview/survey was developed based on the learnings from Phase I and the additional SR- and NTP-related document review. The evaluators conducted the interviews with those involved in the data flow at the country level, including SR staff and available NTP staff. In all, 11 individuals were interviewed during country visits.

Phase III: Rapid Review of Draft Progress Update Report and Supporting Documents

In the third stage of the consultancy, DWC undertook a rapid remote review of the draft progress update report (PUDR) and supporting documents ahead of WHC’s submission to the Global Fund.

This review served to ensure that there were no discrepancies between source documents and data and the final reported results. Where variances were identified, they were documented in a rapid report and thereafter amended in the PUDR.

Phase IV: One-day Training Session

The fourth phase of the assignment involved the design and delivery of a one-day training session for PR and SR M&E staff.  The training session was informed by the findings of the systems and PUDR reviews, as well as GF M&E protocols.

Value

This technical review of M&E systems was highly valuable to the WHC, as well as all other role players within TIMS. It allowed for the independent verification and documentation of the many challenges faced in gathering, monitoring, verifying, collating and reporting on TB data from key mining populations in 10 Southern African countries. This was not a normal M&E systems review. The TIMS M&E Unit was well trained and had set up reasonably good systems, but the complexity of managing a 10-country data gathering and management system caused challenges.

One of the main challenges related to the sheer complexity of trying to manage a health systems intervention with 10 different countries in one region. In this case, TIMS was trying to get each country to gather disaggregated data on mining key populations and their families with the data they already gathered in their TB Registers. Although all part of the SADC region, and using WHO TB tools and processes, each country’s health system differs in many respects.

Although each of these 10 countries signed on to the TIMS programme at a high level, the challenge was in getting standardised data gathering, monitoring and reporting tools, processes and systems across all of these countries. At the onset of TIMS, none of the countries was gathering TB data specifically on mining key populations, although they all used the WHO TB Register system, albeit in slightly different formats, or even languages.

The TIMS programme, therefore, worked with each National TB Programme to introduce a sticker system which would allow those being recorded in TB Registers at primary health facilities to also be identified as a miner, ex-miner, family of these two groups, or living in a mining community.

Each month, health staff were supposed to record and report on the details of those marked by the sticker system. These data were then meant to be captured and reported up to measure against certain targets which TIMS had set itself. However, there were many problems with the entire data chain, from the use of the sticker system to the recording of the data at the facility level to the capturing at the District level, and then to the collation of data at country level, and finally, reporting to the GF.

Problems included the fact that the TIMS M&E Unit was understaffed; there was no way that two full-time staff could build relationships, train staff, verify data and ensure capturing was accurate in all 10 countries. Their capacity was highly stretched. The NTPs also reported their statistics late, which left the M&E Unit little time to check accuracy and enter the data correctly into the reporting tools for the GF. The use of MS Excel spreadsheets also caused many problems with data accuracy, which were only later resolved. A new Management Information System was then introduced to improve on this system and iron out the mistakes.
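To make the consolidation problem more concrete, the sketch below shows, in Python (pandas), how country-level spreadsheets might be combined and checked for simple quality issues before aggregation. It is a minimal, hypothetical illustration: the folder, file and column names are assumptions made for the example, not the actual TIMS reporting tools or the Management Information System that was introduced.

```python
# Illustrative sketch only: consolidating country-level indicator spreadsheets
# and flagging simple data-quality issues before aggregation. The folder,
# file and column names ("country_reports/*.xlsx", "country", "indicator",
# "period", "value") are hypothetical, not the actual TIMS reporting tools.
import glob
import pandas as pd

frames = []
for path in glob.glob("country_reports/*.xlsx"):
    df = pd.read_excel(path)        # requires openpyxl for .xlsx files
    df["source_file"] = path        # keep track of where each row came from
    frames.append(df)

data = pd.concat(frames, ignore_index=True)

# Basic quality checks that a consolidation step can automate:
missing = data[data["value"].isna()]                                    # unreported values
negatives = data[data["value"] < 0]                                     # impossible counts
dupes = data[data.duplicated(["country", "indicator", "period"], keep=False)]  # double entries

for label, issues in [("missing", missing), ("negative", negatives), ("duplicate", dupes)]:
    if not issues.empty:
        print(f"{label} rows found:")
        print(issues[["source_file", "country", "indicator", "period"]])

# Aggregate across countries for semi-annual reporting
summary = data.groupby(["indicator", "period"])["value"].sum()
print(summary)
```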

DWC was able to explore the causes of these challenges and complexities in detail, which assisted not only in documenting the TIMS experience but also in the identification of a number of recommendations for how the systems could be improved. The PUDR review also identified a number of discrepancies and mistakes in the data, which WHC could then correct in time to submit to the GF. Finally, the M&E Systems training was highly useful for PR and SR staff in getting them to understand how to address the challenges they faced and improve their systems.

To stay up to date on all of our projects and industry news, you can follow us on Facebook, Twitter and LinkedIn.

Dancing Jerusalema: We are inspired by the spirit of the COSUP team in Mamelodi, Pretoria

By Community, Evaluation

We watched the COSUP team in Mamelodi dancing Jerusalema. And we left inspired.

In our recent reflection on the highlights of being a researcher and evaluator, we shared how our work gives us the opportunity to get a glimpse of a wide range of real-life situations. Often these situations are defined by conditions that are not optimal, where real and seemingly insurmountable challenges occur. Yet it is often in these situations that we find the most inspiring glimpses of hope and joy. These moments show the passion and resilience of actors in the development space.

Read more on our newsroom

COSUP Project

We are currently conducting work which has introduced us to the Community Oriented Substance Use Programme (COSUP). COSUP is an evidence-based drug harm reduction programme, implemented by the City of Tshwane’s health department through a partnership with the Department of Family Medicine at the University of Pretoria.

This programme is ground-breaking, as the first comprehensive drug harm reduction programme in the country. Programmes that focus on substance use often focus on prevention strategies and demand reduction, or law enforcement aimed at supply reduction. Harm reduction is an essential component of a comprehensive response to drug use. It is embedded in a human-rights approach which departs from traditional punitive approaches. It recognises the humanity, dignity and rights of persons who use drugs.

Read more about how COSUP gives hope to substance users

Dancing Jerusalema

As part of our engagement with COSUP, we conducted a virtual site visit to one of the COSUP facilities in Mamelodi, Pretoria. Through this site visit, we gained a deeper understanding of where and how this dedicated team provides professional services to drug users in the community, and how they do so with the highest level of compassion and care.

A delightful add-on to our data-gathering was a fun video the team shared with us. It’s not directly related to the focus of our data collection, but very relevant in showing us the team spirit and energy of this team, who are doing often difficult work under challenging conditions. We are inspired by the COSUP team.

By Fia van Rensburg

Case study: Assessment of impact of online courses on digital finance services practitioners

By Case Study, Evaluation

From October 2018 to March 2019, Development Works Changemakers completed their assessment of the Impact of Digital Frontiers Institute (DFI) Online Courses on Practitioners in the Digital Financial Services (DFS) Sector.

The project was funded by FSD Africa, Bill & Melinda Gates Foundation, and Omidyar Network, covering a geographic scope of Mozambique, Malawi, Zambia, Rwanda, and Uganda.


Project Outline

DFI aims to develop the capacity and enhance the professional development of DFS professionals working in the private, public, or development sectors. By closing the DFS capacity gaps currently experienced in developing markets, the organisation, in the long term, aims to accelerate financial inclusion.

To achieve this, DFI provides online training and education courses, consisting of seminars delivered through a built-for-purpose online campus. These primarily focus on foundational DFS knowledge and skills, but also include areas of leadership development and change management.

Additionally, DFI facilitates a network of professionals, or communities of practice (CoP) which include in-country face-to-face meetings of DFI students and DFI-affiliated professionals, as well as the moderation of online meetings through DFI’s built-for-purpose digital network and series of global seminars. DFI’s first full year of courses was in 2016.

In 2017, DFI undertook focus group research to understand the impact one of its foundational training courses, the certificate in digital money (CIDM), was having on alumni and their organisations. Data was collected from Zambia, Rwanda and Uganda. In 2018, Development Works Changemakers (DWC) was commissioned to undertake follow-up data collection.

Unlike the 2017 data collection, DWC’s assessment considered the impact of all DFI’s online training courses and specifically focused on the extent to which DFI funders’ M&E indicators were being achieved.

The primary purpose of this assessment was to:

1) assess the extent to which DFI funders’ M&E indicators are being met; and

2) assess the impact DFI training courses have had on participants, their organisations and the industry to date.

Primary data was collected through in-country visits to five DFI markets in Sub-Saharan Africa (SSA), namely Mozambique (Maputo), Malawi (Blantyre), Zambia (Lusaka), Rwanda (Kigali), and Uganda (Kampala).

Project Deliverables

  • Development of primary data collection tools including a survey for practitioners, and interview and focus group discussion (FGDs) guides for practitioners, CoP facilitators, line managers and HR managers, and institution representatives.
  • Final reports including 1) an overall executive summary; 2) an overall introduction, method, a summary of secondary survey data (collected by DFI at six and 18-month follow-up) and recommendations; and 3) individual country reports for the five aforementioned countries, reporting the achievement of indicators and the perceived impact of the courses.
  • An infographic per country depicting the number of students trained, number of trainings attended, number of participants in this study, and findings per indicator and in terms of overall impact.

Our Approach

The assessment responded to a select list of key indicators of interest. The extent to which these indicators were being achieved in the five markets was explored by a combination of cross-cutting data sources and data collection instruments.

The combination of data collection sources and tools aided methodological and data triangulation, which further allowed for the verification of data and a more textured, comprehensive account of DFI’s impact. A mixed-method approach was utilised. This incorporated both qualitative and quantitative data collection and analysis methods that were inclusive and complementary.

This approach allowed for data gathering in multiple ways and the team was able to elicit a variety of perspectives on DFI courses and impact.

Secondary data was collected via:

1) a document review of previous DFI reports; and

2) analysis of programme monitoring data, namely the six-month and 18-month ex-post surveys administered to CIDM graduates only.

Primary data was collected from evaluation participants from four target groups, namely:

1) practitioners (who completed DFI training courses/in the process of completion);

2) line managers and/or HR managers (individuals who oversee/manage the practitioners or are involved in recruitment and/or development within their companies);

3) CoP facilitators (individuals who facilitate the in-country CoP meetings); and

4) institution representatives (individuals who work for key institutions within the DFS sector and could provide broad insight into the DFS market/sector within their country).

Data was collected from these participants using online surveys for practitioners and line managers/HR managers respectively, an FGD/interview guide for practitioners, and an interview guide for line managers/HR managers, CoP facilitators and institution representatives. Surveys were administered online via SurveyMonkey (and also administered in person, in-country, to gather more responses), and participants were incentivized to participate with the offer of discounted DFI courses.

FGDs and interviews were conducted primarily face-to-face in each of the country capitals. Practitioners were invited via email to a CoP meeting, where DWC team members conducted the FGDs. Practitioners in FGDs and in surveys provided the contact details of their line managers and/or HR managers, whom DWC then followed up with for interviews. CoP facilitators made themselves readily available and also assisted in arranging interviews with individuals in major institutions, including government ministries, banks, and interbanks.

The evaluation team analysed both the primary and secondary data collected, using ATLAS.ti for thematic analysis of the qualitative data and Microsoft Excel for descriptive statistical analysis of the quantitative data.
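For illustration only, the short Python (pandas) sketch below shows one way a descriptive summary of survey responses by country could be produced. The file name and column names are hypothetical placeholders; the actual DFI analysis was carried out in Microsoft Excel and ATLAS.ti as described above.

```python
# Hypothetical sketch of a descriptive summary of survey responses by country.
# The file and column names below are illustrative, not the actual DFI dataset.
import pandas as pd

survey = pd.read_csv("dfi_survey.csv")   # one row per respondent per question

# Share of respondents agreeing with each indicator statement, per country
summary = (
    survey
    .assign(agree=survey["response"].isin(["Agree", "Strongly agree"]))
    .groupby(["country", "indicator"])["agree"]
    .mean()
    .mul(100)
    .round(1)
    .rename("% agreeing")
)
print(summary)
```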

Value

The assessment provided a valuable opportunity for DFI to take stock of its achievements since 2016. It gave DFI insight into the extent to which its set indicators were being met, as well as where gaps exist, areas for improvement, and best practices that could be expanded upon in different countries.

It also provided insight into the value and impact that the courses may be having on individuals and their companies. Based on the findings, several recommendations were made that, if implemented, could improve the DFI courses going forward.

Recommendations focused on improving CoP attendance and morale, amendments to and additional DFI courses, identifying local partnerships to reduce costs and increase reach, course support, and monitoring and evaluation (M&E). M&E recommendations noted the challenges experienced with data collection and made suggestions for follow-up data collection in 2019 and 2020.

DWC also suggested that the report be used as a tool for learning, not only for DFI’s internal planning but for each country’s alumni and CoPs. Specifically, the initiatives being developed in different countries, and their achievements, should be shared with alumni and CoP facilitators, who may be able to learn how to implement similar initiatives themselves. This led to the development of an infographic per country, showing how each country fared against the indicators and what overall impact was reported.

The infographics can serve as marketing tools, showing how participation in a course can add value to one’s personal and career development, and as a learning tool for other countries, especially in terms of launching their own formalized CoPs.

We’d love to work with you

We hope that showcasing our case studies gives you insight into our areas of expertise. If you have any questions about the research, evaluation, monitoring and development industry, feel free to contact us.

Case Study: IREX – Evaluation of Mandela Washington Fellowship Programme (YALI)

By Case Study, Evaluation

With many years of experience, we’ve been showcasing some of our case studies in various niches of the development, evaluation and research space.

From December 2018 to June 2019, we provided a final impact evaluation for the International Research and Exchanges Board (IREX). The project was funded by USAID and the geographic scope covered 49 countries in sub-Saharan Africa.

This final impact evaluation of the USAID-funded, Africa-based follow-on activities of the Mandela Washington Fellowship (MWF) Program focused on leadership development.

Project Outline

The Young African Leaders Initiative (YALI) was launched in 2010 by President Barack Obama as a signature effort to invest in the next generation of African leaders. The Fellowship commenced in 2014 as the flagship program of YALI. It is aimed at empowering young leaders from Africa (between the ages of 25 and 35), building their skills to improve the accountability and transparency of government, start and grow businesses, and serve their communities. It consists of academic coursework, leadership training and networking.

The Fellowship is implemented by the international non-profit organisation IREX as a cohort-based program, with six annual cohorts, one for each calendar year from 2014 to 2019[1]. The program consists of attending a US-based leadership institute and the Mandela Washington Fellowship Summit. Some Fellows also have the opportunity to participate in a professional development experience in the U.S.

The United States-based activities are funded by the U.S. Department of State and are managed separately from the Africa-based activities, which are funded by the United States Agency for International Development (USAID). This evaluation focused on the Africa-based, USAID-funded component only.

Steps in the programme

During their stay in the US, each Fellow is expected to put together a Leadership Development Program (LDP), which they finalise when they complete their Leadership Institute and share online for comment and peer review. The LDPs form part of the USAID-funded component of the program and, over time, were voluntarily adopted by the US-based institutes. LDPs are distributed at pre-departure orientations to connect the US-based and Africa-based parts of the programme, and to guide Fellows in implementing their US-based learning when they return to their home countries.


Upon returning to their home countries, Fellows continue to build the skills they have developed during their time in the United States through support from US embassies, the YALI Network, USAID, the Department of State, and affiliated partners[2]. Through these experiences, Mandela Washington Fellows are able to access ongoing professional development and networking opportunities, as well as support for their ideas, businesses, and organizations. Fellows may also apply for their American partners to travel to Africa to continue project-based collaboration through the Reciprocal Exchange Component.

The Africa-based activities are designed to support Fellows as they develop the leadership skills, knowledge, and attitudes necessary to become active and constructive members of society. They may also choose to participate in a number of USAID-supported follow-on activities, including professional practicums, mentorships, Regional and Continental Conferences and conventions, Regional Advisory Boards (RABs), Speaker Travel Grants (STGs), Continued Networking and Learning (CNL) events, and Collaboration Fund Grants (CFGs).

To assist with the implementation of these Africa-based follow-on activities, IREX has collaborated with three regional partners in Southern Africa (The Trust), East Africa (VSO Kenya), and West Africa (WACSI).

Project Deliverables

The purpose of this final impact evaluation of the USAID-funded, Africa-based follow-on activities of the Mandela Washington Fellowship (MWF) program was to determine and portray the emerging results of the program and to inform current and future youth leadership programming.

The deliverable was an impact evaluation report that answered the following main evaluation questions:

  1. What is the impact of follow-on activities on male and female Fellows’ skills, knowledge, and attitudes necessary to become active and constructive members of society, compared to those men and women who did not participate in the follow-on activities?
  2. How has the program impacted practices of male and female Fellows in supporting democratic governance through improving the accountability and transparency of government in Africa?
  3. Has the program helped male and female Fellows to start new businesses? To what extent has participation in the program helped Fellow-led businesses expand and become more productive?
  4. How has the program impacted male/female Fellows’ identification with, and participation in, community challenges/social responsibility?
  5. To what extent is the network for Mandela Washington Fellowship male and female alumni who collaborate on issues of democratic governance, economic productivity and civic engagement a self-sustaining network? How have USAID-funded follow-on activities contributed to this?

In addition, cross-cutting themes that had to be considered included: empowerment of women and other marginalised youth, including the disabled and LGBTQI, to address inequalities and development challenges; increase of youth participation overall, with an emphasis on how these empowered youth can contribute to their countries’ development; and the establishment of significant partnerships with the private sector to leverage resources, increase impact, and enhance sustainability of planned activities.


Our Approach

The evaluation adopted a mixed-method approach, gathering both quantitative and qualitative data from a large sample of Fellows who had, and had not, participated in Africa-based follow-on activities. Quantitative data was gathered through an online survey of 1292 Fellows, 35 percent of the total Fellow population. Qualitative data was gathered through one-on-one interviews, conducted either face-to-face or via Skype, with Fellows and program staff and partners, and through focus group discussions with Fellows during country visits to six African countries.

In this way, a wide range of stakeholders was included in the evaluation. Quantitative and qualitative data were cleaned, transcribed, analysed and incorporated into the findings of the evaluation. Both quantitative and qualitative data was also gathered from secondary sources, including literature on leadership in Africa, and a range of sources provided by IREX on the Africa-based follow-on activities. In addition to the main report, five case studies were produced, highlighting specific programme outcomes.

Value

The value of this evaluation was two-fold. It showed that the aims and methods of the Mandela Washington Fellowship, including the Africa-based follow-on activities, are highly relevant and in line with literature and best practice on youth leadership development in Africa. It also contributed to the body of knowledge on (youth) leadership development programmes, specifically those based on the ethos of values-based servant and transformational leadership.

The evaluation showed that the MWF is highly relevant in fostering individual, group and community values within young people, so that they can become true leaders in their own sectors and communities. In addition, it showed how these young people solidify their leadership roles within their own careers and sectors at a crucial time when they are progressing, thereby becoming more respected and influential in their workplaces and communities, and more active in society.

The Africa-based follow-on activities enabled Fellows to solidify the knowledge and skills gained in the US, to ground and root the US-based learning, and helped Fellows put their new knowledge into practice. The program has significantly strengthened many of the values that the Social Change Model (SCM) of leadership focuses on, especially consciousness of self, congruence, commitment, collaboration, common purpose, and citizenship, not only of home countries but also of Africa in general.

The evaluation showed that amongst other gains, experiential learning through participation in follow-on activities promoted innovative thinking, facilitated shifts in attitudes towards gender roles, rights and sexuality, and motivated Fellows to engage in social entrepreneurship.

Client testimonial

“Development Works Changemakers was selected from a competitive pool of applicants. One of the aspects of their proposal which stood out was their focus on applying an inclusivity lens to their approach. As well as their demonstrated understanding of the leadership field. And, more specifically, their knowledge and experience with leadership in the African context, in addition to the participatory methodology proposed. Once the evaluation got underway, the timeline proposed was adhered to, despite some difficulties with timely responses from key informants.

Development Works Changemakers sifted through an enormous amount of program document data. They collected and analyzed information from program participants and stakeholders. And worked collaboratively with us to surface the most useful data points and findings to highlight program impact and challenges. Their research was insightful and grounded. Where possible with relevant outside data sources that triangulated findings or demonstrated the nuances they found were unique to our circumstances.

The finished report highlighted the most important findings for our research questions. It provided as much detail as could be extrapolated from the data available. Particularly blending the quantitative and qualitative data findings into a cohesive narrative. Development Works Changemakers were professional, insightful, thorough, and responsive to feedback.  I would highly recommend them for a range of evaluation and assessment work.”

– Cheryl Schoenberg, Deputy Director, Leadership Practice, IREX, and former Chief of Party for the Mandela Washington Fellowship

Development Works Changemakers Evaluation

Over the next couple of months, we’ll be showcasing more of our case studies and highlighting the various methods of our approach.

Do you want to stay up to date with industry news and happenings? You can sign up to our newsletter and follow us on social media.


[1] This evaluation excludes the 2019 cohort.

[2] YALI has also established four Regional Leadership Centres (Ghana, Senegal, South Africa and Kenya), and a number of satellite centres, to offer leadership training programs to young leaders between the ages of 18 and 35. The four RLCs are based at higher-education institutions in their host countries.

Case Study: UNODC – Baseline, endline and impact evaluation of the LULU programme

By Case Study, Evaluation

At Development Works Changemakers, our passion for change can be seen in our many case studies. The Baseline, Endline and Impact Assessment of the Line Up Live Up (LULU) Programme in South Africa began in May 2019 and was completed in January 2020.

The client, the United Nations Office on Drugs and Crime (UNODC), worked with the Western Cape Government Department of Cultural Affairs and Sports (DCAS) on the baseline, endline and impact assessment, which DWC conducted. The assessment focused on the areas of Khayelitsha and Mitchells Plain in the Western Cape.


Project Outline

The Line Up Live Up (LULU) programme is a sport-based life skills training curriculum developed to improve youths’ knowledge/awareness, perceptions, attitudes, life skills and behaviours in order to build resilience to violence, crime and drug use. The programme is designed to be delivered over 10 sessions to male and female youth aged 13 to 18 years.

Each session includes interactive sports-based activities, interspersed with reflective debriefing spaces in which life skills are imparted. These sessions are envisaged to lead to various outcomes, which in the long-term include youth engaging less in risk and antisocial behaviours and demonstrating resilient behaviour.

The LULU programme is being piloted in Brazil, Colombia, the Dominican Republic, Kyrgyzstan, Lebanon, Peru, Palestine, Tajikistan, Uganda, Uzbekistan and in 2019, it was piloted in South Africa. In South Africa, the programme is run in cooperation with the Western Cape DCAS as part of its flagship MOD afterschool programme.

In 2019, DWC was commissioned to conduct a baseline, endline and impact assessment of the LULU programme in nine schools in Khayelitsha and Mitchells Plain, two high-crime areas in the Western Cape, South Africa. The purpose was to assess only the short-term outcomes (knowledge and perceptions) and selected medium-term outcomes (attitudes and behaviours) of the LULU programme. The findings of this study are intended to be used for cross-country comparisons, and for informing programme improvements.

Project Deliverables

As part of the assessment, DWC produced:

  • Adjusted data collection tools that were provided by UNODC and adapted to the South African context and made more youth-friendly; these included a baseline/endline survey for youth, a self-reporting survey for youth participating in LULU, and focus group discussion (FGD) guides for youth, coaches, area managers and DCAS management.
  • A literature review focused on the context of crime in South Africa and the Western Cape province specifically; the profiles of Khayelitsha and Mitchells Plain; policy and other approaches to tackling crime in South Africa; and international and local examples of the sports-based life skills approach.
  • A baseline report outlining participating learners’ profile (including demographics and experiences of family/home life, school and community) and outcomes of interest prior to launching the LULU programme in schools
  • An endline report comparing baseline data to endline data to assess changes in the outcomes of interest following the completion of the LULU programme; and
  • An impact report, building on the endline report by additionally discussing lessons learned and recommendations; and
  • An executive summary report and a summary report of the final impact report.

Our Approach

The assessment followed a mixed-method approach, which combined qualitative and quantitative data analysis in order to bring a robust and credible set of findings to the report.

A non-equivalent, multiple-group time-series design was employed, whereby data was collected at baseline before the programme commenced and at endline once the programme concluded. Data was collected from learners at nine schools across Khayelitsha and Mitchells Plain, drawn from:

  • Learners who participated in the LULU programme (treatment group)
  • Learners involved in the afterschool MOD programme (control group I); and
  • Learners who did not participate in any afterschool activities (control group II).

While the initial design of this evaluation assumed all LULU learners would have attended all 10 sessions, this was not the case. Due to the high proportion of learners who did not attend all sessions, all learners were still included, but additional analyses were incorporated to assess the extent to which attendance at 1-6 vs 7-10 sessions influenced outcome indicators.

Secondary data was also collected through a literature and programme document review. Primary data was collected using surveys and focus group discussion (FGD) guides provided by UNODC, which DWC adapted to be more child-friendly, to include colloquial language, to ensure that all outcomes were adequately measured (by adding questions), and to shorten the surveys to keep learners interested. In terms of primary data collection:

  • Baseline survey data was collected from 724 learners (313 LULU learners; 204 MOD learners; and 207 non-intervention learners);
  • Follow-up endline survey data was collected from 658 learners (262 LULU learners; 195 MOD learners; and 201 non-intervention learners);
  • Endline self-administered survey data was collected from 210 LULU learners; and
  • FGDs were conducted with a) 8-10 learners from five schools, respectively; b) 16 coaches from all nine schools; c) four Area Managers covering the two Metros and d) two DCAS programme management staff.

Ethical approval from a research ethics committee was granted for this evaluation. The programme and the study itself were constrained by a highly limited timeline, which impacted the implementation of the programme. The study period and school timelines forced the programme to be implemented within a five-week period rather than 10-weeks, which limited the programme’s dosage and duration.

The study period also did not allow sufficient time for LULU participants’ learnings to be fully absorbed and advanced. There were also issues with programme fidelity, and most learners did not attend all 10 LULU sessions as required. These issues made outcomes, especially the more medium-term outcomes of attitude and behaviour change, difficult to achieve. These challenges were highlighted for consideration when the findings of the study were interpreted.

Primary and secondary data were analysed using ATLAS.ti for thematic analysis of the qualitative data, and Microsoft Excel and IBM SPSS for descriptive and inferential statistical analysis of the quantitative data.
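As a purely illustrative sketch of the kind of descriptive and inferential comparison described here, the Python example below contrasts baseline-to-endline change between a treatment and a control group using Welch’s t-test. The file and column names are hypothetical assumptions; the study’s actual analysis was conducted in Microsoft Excel and IBM SPSS.

```python
# Illustrative sketch only: comparing baseline-to-endline change in an outcome
# score between a treatment and a control group. The file and column names
# ("lulu_scores.csv", "group", "baseline", "endline") are hypothetical
# placeholders, not the actual LULU dataset or the SPSS analysis used.
import pandas as pd
from scipy import stats

df = pd.read_csv("lulu_scores.csv")            # one row per learner
df["change"] = df["endline"] - df["baseline"]  # change in outcome score

# Descriptive statistics: mean change per group
print(df.groupby("group")["change"].agg(["count", "mean", "std"]))

# Inferential statistics: Welch's t-test on change scores, treatment vs control
treatment = df.loc[df["group"] == "LULU", "change"]
control = df.loc[df["group"] == "MOD", "change"]
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```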

Value

The evaluation produced valuable information, including significant lessons learned and recommendations on the LULU programme that may help inform its improvement going forward in South Africa. Lessons and recommendations included the need for key stakeholder buy-in; longer and more intensive coach training; support to coaches and area managers; and psychosocial support for both learners and coaches.

Further, those short-term outcomes that were achieved can provide evidence to potentially support funding and buy-in for the ongoing implementation of the programme in the future. Finally, the data can be used for comparison with the other programme implementation pilot countries, and lessons learned from this assessment can guide programme implementation and the study thereof in these other countries going forward.

Overall, given the difficulties faced, the programme and its implementers/managers should be commended on the outcomes realised. What was achieved suggests that, had the programme been implemented as intended (in terms of dosage, duration and fidelity) and under the right conditions (with full attendance by all learners and enough time for change to manifest within the study period), further outcomes could have been achieved.

Development Works Changemakers Evaluation

Over the next couple of months, we’ll be showcasing more of our case studies and highlighting the various methods of our approach.

To stay up to date with industry news and happenings, you can sign up to our newsletter and follow us on social media.


Case study: Design and Implementation Evaluation of the Cash Plus Care Programme

By Case Study, Evaluation

At Development Works Changemakers (DWC) we have a passion for social change, and this is seen in our many case studies and successful evaluations. In April 2019 we started working with the Western Cape Department of Health (WCG: Health) and the Desmond Tutu HIV Foundation (DTHF) to provide a design and implementation evaluation in the health and social development sector. The project continued until August 2019 and was funded by the Global Fund to Fight AIDS, Tuberculosis and Malaria (GF).

Project Outline

The Cash plus Care programme is a component of the Western Cape Department of Health’s (WCG Health) Young Women and Girls Programme (YW&G), funded by the Global Fund (GF). This programme was implemented in two neighbouring sub-districts in Cape Town, Mitchells Plain and Klipfontein.

The aims of the YW&G programme were to: decrease new HIV infections in girls and young women (aged 15-24); decrease teenage pregnancies; keep girls in school until completion of grade 12, and increase economic opportunities for young people through empowerment. The objectives of the intervention were to: enhance the sexual and reproductive health of participants through preventative and promotive health and psychosocial interventions whilst enhancing their meaningful transition to adulthood; and to reduce HIV and STI incidence, unintended pregnancy and gender-based violence amongst Cape Town women in late adolescence.

The programme, also called “Women of Worth” (WoW), provided 12 weekly empowerment sessions and access to youth-friendly healthcare, seeking to address a range of biomedical, socio-behavioural, structural and economic vulnerabilities amongst participating young women. Cash incentives of R300 per session were provided to the young women for participation in empowerment sessions. Implementation of the programme was sub-contracted to the Desmond Tutu HIV Foundation (DTHF) by the Western Cape Department of Health in November 2016, and implementation of the programme began in April 2017.

DWC was contracted by the WCG: Health to conduct an evaluation to assess the Cash plus Care programme’s design and implementation. Specifically, the evaluation focussed on the appropriateness of the programme design, incentives, and recruitment processes; beneficiary and stakeholder satisfaction; and the quality of implementation. The evaluation also identified successes and challenges/barriers and made recommendations to the Global Fund and WCG: Health to inform the design and implementation of future programmes.


Image: Pentecostal Upper Hall Church

Project Deliverables

Project deliverables were:

  • A theory of change workshop with all key stakeholders from WCG: Health and the DTHF
  • A theory of change diagram showing all key contextual factors, assumptions, inputs, outputs and outcomes
  • A draft evaluation report
  • A final evaluation report
  • A final report in 1:5:25 format

Our Approach

This evaluation was formative and clarificatory, aimed primarily at learning and improvement, but it included some elements of early impact assessment (particularly unintended consequences) and summative aspects that informed decision-making on the future of the programme and similar programmes.

The evaluation adopted a mixed-methods evaluation design, which used mostly qualitative data. Existing quantitative data was drawn on where appropriate from the various programme and other documents reviewed.

Both primary and secondary qualitative data were gathered and analysed to answer the evaluation questions. This evaluation, which was essentially a rapid assessment, relied on the design strength produced by a mixed-methods approach which allows for triangulation of method, observers and measures. Given that the programme experienced several changes in its initial design, the developmental approach followed emphasised learning and improvement.

Secondary data was obtained from project documents, while primary data was obtained from the following sources:

  • A Theory of Change workshop with a large range of stakeholders;
  • Key Informant Interviews with WCG Health and DTHF staff (16 interviews);
  • One day site visits to five of the 11 operating empowerment session venues;
  • During the site visit, interviews with the site facilitator, at least one FGD with current participants at the site, and one-on-one interviews with any other participants, where targeted. Empowerment sessions were also observed and key aspects were noted on an observation tool;
  • Telephonic interviews with previous graduates who were “cash-no”;
  • Telephonic interviews with programme dropouts, both “cash-yes” and “cash-no”.

In all, interviews and FGDs involving 73 beneficiaries were conducted.


Image: Philippi Village

Value

The value of this design and implementation evaluation was, firstly, that it provided WCG: Health and the DTHF with an independent examination of the complexities of implementing the Cash Plus Care programme in the Western Cape. A number of challenges had been experienced by the programme implementers since its inception. Many of these related to various delays in contracting the sub-recipient (DTHF) and the subsequent rush by the sub-recipient to catch up on various aspects of the programme. Another key challenge was that this programme was both an intervention (with the above-stated aims) and an academic randomised controlled trial, designed to inform the GF of the efficacy of using cash incentives to change risky behaviour.

Many of the design and implementation challenges had to do with trying to align the requirements of a randomised controlled trial with the realities of implementing an empowerment programme with young women living in a complex socio-economic setting. The fact that half of the targeted participants were randomly allocated to the "cash-yes" group, while half were allocated to the "cash-no" control group, caused numerous problems.

For a start, those in the control group quickly learnt that they were not receiving cash, and many then dropped out. Recruitment of young women for the study was also not effective at the beginning of the programme, which meant it struggled to reach its targeted numbers in time. Only once cash incentives were made available to all participants and a proper community mobilisation team was put in place did the numbers pick up, allowing the programme to reach its goal. This evaluation brought to light many of these issues and made recommendations for how to mitigate them in similar future programmes.

The delayed commencement of the programme and the subsequent rush also meant that the biometric system used to manage participation and incentives was not properly ready. Many glitches were experienced, which had to be corrected and mitigated along the way. This evaluation helped to document these problems and the ways in which they were solved.

The evaluation also brought to light the experiences of participants and the value they felt the programme had brought to their lives. It showed some emerging elements of empowerment and behaviour change, as well as new forms of social cohesion forming between attendees. The many recommendations made, based on these findings, were of great value to the programme implementers and funders, who received the evaluation very positively.

Image: Tell Them All International

Development Works Changemakers Evaluation

Over the next couple of months, we’ll be showcasing more of our case studies and highlighting the various methods of our approach.

To stay up to date with industry news and happenings, you can sign up to our newsletter and follow us on social media.



Six highlights of working as an Evaluator – Development Works Changemakers

By Evaluation

Development Works Changemakers (DWC) is dedicated to creating a community of global changemakers that implement innovative development solutions that address global socio-economic challenges to transform communities. Evaluation is a key part of this process.

Our expert team includes experienced evaluators who love their job. Here are six powerful highlights of working in evaluation, as shared by team members Susannah, Fia, and Jenna.

1.   Working with specialists in a variety of sectors

One of the key aspects is being able to apply our technical evaluation expertise across various sectors as we collaborate with sector specialists. This makes for a very enriching experience: you’re always dealing with new subject matter and looking for solutions that work across various areas and sectors.

It’s empowering to be able to work with economists, sector specialists, and others in the development sector who all have a common goal of looking for solutions that work and improve programming. This includes working with multi-disciplinary teams, learning from them, and picking up useful skills and knowledge.


2.   Constant learning and sharing of knowledge

With evaluation, you’re generally working with a variety of people, such as stakeholders, beneficiaries and other groups. It’s wonderful to have that human interaction and be able to work with people, understanding their points of view and positions. Being able to work with various people from diverse backgrounds is very rewarding.

There is also the opportunity to pick up new skills and methods, then apply them. Project-based work gives you a clean slate for each project, providing a chance to implement new suitable and relevant methods. The evaluation space is one of constant learning.


3.   Unique daily challenges and growth

The variety of the role of the evaluator involves working with a wide range of topics and sectors. Every evaluation is different, so this makes it very interesting and puts you in touch with a different range of stakeholders and programs. In the process, you learn a lot about the development sector through your work, growing in your skills with each project.

This also means that you are constantly being challenged as there is no routine. Each project needs continuous adaptation and learning.

4.   Engaging directly with people

The opportunity to engage directly with people, for example during interviews, and to understand development issues and people’s lives better is a highlight. It is an immense privilege to learn about the situations of people in different contexts, gaining a broader understanding of the diversity of humanity. This is possibly the most fascinating part.

During this engagement with people during interviews, as a researcher and evaluator, you also gain personally from the interview. You cannot do research and evaluation and not be touched or influenced by the lives of others.

5.   Telling a story through data

There’s a joy that comes from using analytics and data as evidence for making decisions. Using data that’s collected by the program itself, or data collected through the process, to inform decision making is often more effective than relying on what other people have said in the past.

This is so important – following what the evidence is saying and tailor-making decisions and programmatic strategies according to what the data depicts. In a time like now, this is a really important direction to be moving into – data leading the way in decision making. Both qualitative and quantitative analytics can tell a story to inform decision making.


6.   Being a Strategic Changemaker

Being an evaluator gives you an opportunity to make a contribution to the development sector as you’re making recommendations in evaluations for projects, programs, and policies.

In this way, you’re able to support change and improvement. It’s not just about getting the report out, but rather improving projects and programs so that the finances and funding set aside for development can be spent more efficiently and on the type of interventions that will have the desired results. These all contribute to shaping real and positive change.

Having the opportunity to help organizations and staff members improve their programming at a very strategic level is important, whether that is through an independent evaluation providing recommendations about what you, as an objective observer, see, or through suggestions that you’ve gathered from the people you’ve spoken to. All feedback helps shape positive change.

It’s inspiring and motivating to be involved at such a strategic level, engaging with diverse stakeholders, including when you’re speaking to higher-level program directors and gaining their feedback on learnings to be incorporated into the program for future planning. Insights from the various stakeholders are all valuable in shaping, focusing and prioritising the work.

The inputs of staff members who may be new to the research and evaluation process are also valuable. We appreciate the chance to help build capacity and support them in setting up M&E frameworks, and to help programme staff and decision-makers build a good practice of regularly and methodically collecting and using data for decision making.

For some, M&E can be seen as an additional responsibility and burden, but when it is positioned as a tool for learning and they see the progress towards targets, it can be very rewarding.


How the COVID-19 Crisis Shows We Need More Feminist Evaluation

By Evaluation, Gender

Myths about Feminist Evaluation and how the COVID-19 crisis shows we need more feminist evaluations 

There is broad agreement in the evaluation community that evaluators often have to be eclectic. Evaluators need to know the evaluation theory landscape and be aware that some approaches are appropriate in certain contexts and not in others. Evaluators also have to be able to implement a range of evaluation approaches and know that a single approach may not offer everything needed for a specific evaluation.

Cartoon: Chris Lysey – Fresh Spectrum

Feminist Evaluation is one such approach. It is not relevant in all situations and has limitations. However, the potential of Feminist Evaluation may be much larger than its current use, particularly given that the vast majority of development projects focus on social issues related to vulnerability and marginalisation. For some, the name may be a hurdle. Because of this, Feminist Evaluation is not fully recognised for its flexibility, utility and relevance, and is therefore likely to be under-utilised.

Myths about Feminist Evaluation

Feminist Evaluation appears to be misunderstood, judging by some common myths about the approach.

  • Myth: Only women can be feminist evaluators
  • Myth: Feminist Evaluation is only about women’s rights
  • Myth: Feminist Evaluation and gender evaluation are the same

A myth is “a traditional story, especially one concerning the early history of a people or explaining a natural or social phenomenon, and typically involving supernatural beings or events”, or “a widely held but false belief or idea”.[1]

Feminist evaluations are scarce

Feminist evaluations are not encountered often and, with the exception of some donors, it is extremely rare to find a Terms of Reference that explicitly asks for a Feminist Evaluation. Considering the strong reactions elicited by the word “feminism”[2], it is no surprise that feminist evaluation is not common. And even when this approach is used, it may be presented under a different name, such as gender evaluation.

In addition to the reluctance of evaluators to label their studies as Feminist Evaluations, the dearth of Feminist Evaluation studies may stem from the approach being regarded as relatively new – although it has in fact existed for a significant period of time. Another factor that could contribute to the low profile of Feminist Evaluation is that discussions on evaluation methods often do not include it.[3]


Core beliefs

An evaluator who is committed to the protection of human rights, who wants to ensure that the voices of marginalised people are heard, and who wants to use Feminist Evaluation often needs to master the art of diplomacy first. Some propose that evaluators should introduce Feminist Evaluation not by its name but by the core beliefs that underpin the approach, presented as potentially useful lenses for an evaluation. These core beliefs are often more palatable.

These beliefs are:

1. There should be equity amongst human beings. Equity should not be confused with equality.  Equity is giving everyone what they need to be successful. Equality is treating everyone the same.

“Equality aims to promote fairness, but it can only work if everyone starts from the same place and needs the same help. Equity appears unfair, but it actively moves everyone closer to success by ‘levelling the playing field.’ But not everyone starts at the same place, and not everyone has the same needs.” – Everyday Feminism [4]

2. Inequality (including gender inequality) leads to social injustice.

“Social inequality refers to relational processes in society that have the effect of limiting or harming a group’s social status, social class, and social circle.” It can stem from society’s understanding of gender roles, or from social stereotyping. “Social inequalities exist between ethnic or religious groups, classes and countries, making the concept of social inequality a global phenomenon.”

Social inequality is linked to economic inequality, although the two are not the same. “Social inequality is linked to racial inequality, gender inequality, and wealth inequality.” Social behaviour, including sexist or racist practices and other forms of discrimination, tends to filter down and affect the opportunities people have access to, which in turn affects the wealth they can generate for themselves. – ScienceDaily[5]

3. Inequalities (including gender-based inequalities) are systematic and structural

“Conceptions of masculinity and femininity, ideas concerning expectations of women and men, internalised judgements of women’s and men’s actions, prescribed rules about proper behaviour of women and men – all of these, and more, encompass the organisation and persistence of gender inequality in social structures. The social and cultural environments, as well as the institutions that structure them and the individuals that operate within and outside these institutions, are engaged in the production and reproduction of gender norms, attitudes and stereotypes. Beliefs that symbolise, legitimate, invoke, guide, induce or help sustain gender inequality are themselves a product of gender inequality.” – European Institute for Gender Equality[6]

power to the people protest

Checking the myths against the core beliefs

Myth: Only women can be feminist evaluators

Feminist Evaluation can be used by evaluators who do not identify as feminists

If the evaluator identifies with one or more of the core beliefs associated with feminist evaluation, the approach can be used whether or not the evaluator identifies as a feminist, and whether or not the evaluation is labelled as a Feminist Evaluation.[7] When undertaking a Feminist Evaluation, the evaluator can use one or more of the core beliefs to shape the evaluation: what data is collected, what data sources are used, and what critical insights and perspectives are required to address the evaluation questions at hand.

Myth: Feminist evaluation is only about women’s rights

Feminist evaluation is about human rights, not only women’s rights

While the essence of Feminist Evaluation theory is to reveal and provide insight into individual and institutional practices that have devalued, ignored or denied access to women, it also relates to other oppressed and marginalised groups[8], and to other forms of inequality. What distinguishes Feminist Evaluation is its focus on the impact of culture, power, privilege, and social justice.[9]

Feminist theories and feminist research

There are a whole range of variations of feminist theories, including “liberal, cultural, radical, Marxist and/or socialist, postmodern (or poststructuralist), and multiracial feminism” (Hopkins and Koss, 2005 in Mertens & Wilson, 2012:179). Each of these focuses on different forms of inequality.

Feminist research is part of the genre of critical theory, and Feminist Evaluation has developed alongside feminist research, which followed a path from “feminist empiricism, to standpoint theory, and finally to postmodern feminism” (Seigart, 2005 in Podems, 2010: 3)[10].

Myth: Feminist evaluation and gender evaluation are the same 

Feminist and gender evaluation are not the same

The gender and development (GAD) approach evolved from the Women in Development (WID) and Women and Development (WAD) approaches[11]. Gender approaches started with Women in Development (WID), which emphasised women’s economic contribution but failed to recognise how this put additional strain on women. Women and Development (WAD) made connections between women’s position in society and structural changes, but failed to challenge male-dominated power structures.

The GAD approach:

  • Focuses on how gender, race, and class and the social construction of their defining characteristics are interconnected.
  • Recognises the differential impact of projects, programmes and interventions on men and women (necessitating the collection of gender-disaggregated data).
  • Encourages data collection that examines inequalities between men and women, and uses gender as an analytical category.

Feminist Evaluation views women in a way that recognises that different people (including women) experience oppressive conditions differently, as a result of their varied positions in society, resulting from factors such as race, culture, class, and (colonial) history.

The difference between Gender Evaluation and Feminist Evaluation

GENDER EVALUATION vs. FEMINIST EVALUATION

Gender evaluation: Maps and records women’s position.
Feminist evaluation: Attempts to strategically affect women’s lives, as well as the lives of other marginalised persons.

Gender evaluation: Sees the world in terms of “men” and “women”, and does not recognise differences between women based on class, culture, ethnicity, language, age, marital status, sexual preference, and other differences.
Feminist evaluation: Acknowledges and values these differences, recognising that “women” are a heterogeneous category.

Gender evaluation: Appears to assume that all women want “what men already have, technically should have, or will access through development interventions”, i.e. that equality with men is the ultimate goal.
Feminist evaluation: Allows for the possibility that women may not want what men possess. This requires different criteria, which generate different questions and lead to vastly different judgements and recommendations.

Gender evaluation: Provides written frameworks that guide the evaluator in collecting data, but does not include critical feminist ideals in those frameworks.
Feminist evaluation: Does not provide frameworks that guide the evaluator. Instead, evaluators are encouraged to be reflexive and are not regarded as value-free or neutral. The approach explores different ways of knowing and listens to multiple voices, emphasises the need to give voice to women within different social, political and cultural contexts, and advocates for (all) marginalised groups.

Gender evaluation: Gender approaches are not challenged as Western concepts.
Feminist evaluation: The word “feminist” elicits a range of responses, and feminist evaluation may appear to propose a biased approach; some see feminism as a Western concept and question whether it is appropriate in a non-Western context.

Source: Podems, 2010: 9

Feminist evaluators are advocates for human rights

A core element of feminist evaluation is that it challenges power relations and the systemic embeddedness of discrimination. This, together with the recognised and preferred role of the evaluator as an activist, distinguishes feminist evaluation from approaches such as principles-focused evaluation.

The primary role of the evaluator is to include the marginalised, absent, misrepresented and unheard voices. The philosophical assumptions of the transformative evaluation branch, where feminist evaluation is located, form the foundation for inclusive evaluation. The evaluator does not exclude the traditional stakeholders who are usually included in evaluations (e.g. intended users, decision-makers, programme staff, implementation partners, funders and donors), but ensures that data is gathered from an inclusive group of stakeholders and that those who have traditionally been under-represented, or not represented at all, are included.[12] Feminist evaluation, like other approaches that fall under the transformative branch, is a bottom-up approach that makes change part of the evaluation process.[13]

Making Feminist Evaluation practical

All of this may sound rather theoretical, but there are ways to make Feminist Evaluation practical. Feminist evaluation does not provide set frameworks and does not identify specific processes. It does, however, have eight tenets, which provide a useful “thinking framework” for evaluators.

EIGHT TENETS OF FEMINIST EVALUATION[14]

  1. Evaluation is a political activity, in the sense that the evaluator’s personal experiences, perspectives and characteristics come from, and lead to a particular political standpoint.
  2. Knowledge is culturally and socially influenced.
  3. Knowledge is powerful, and serves direct and articulated purposes, as well as indirect and unarticulated purposes.
  4. There are multiple ways of knowing.
  5. Research methods, institutions and practices have been socially constructed.
  6. Gender inequality is just one way in which social injustice manifests, alongside other forms of social injustice, such as discrimination based on race, class and culture, and gender inequality links up with all three other forms of social injustice.
  7. Gender discrimination is both systematic and structural.
  8. Action and advocacy are regarded as appropriate ethical and moral responses from an engaged feminist evaluator.

Feminist Evaluation can be made practical by mapping its tenets to Michael Quinn Patton’s GUIDE Framework and re-framing them as principles for principles-focused evaluation.

PATTON’S GUIDE FRAMEWORK AND PRINCIPLES-FOCUSED EVALUATION

The GUIDE Framework [15] is a set of criteria which can be used to clarify effectiveness principles for evaluation. It is used in Principles-Focused Evaluation (PFE). GUIDE is an acronym and mnemonic specifying the criteria for high-quality principle statements. A high-quality principle:

  • Provides Guidance
  • Is Useful
  • Inspires
  • Supports ongoing Development and adaptation
  • Is Evaluable

Principles-Focused Evaluation (PFE) is based on complexity theory and systems thinking. The approach operates from the perspective that principles inform and guide decisions and choices, and maintains that the deeply held values of principles-driven people are expressed through principles that translate values into behaviours. In this approach the principles themselves become the evaluand, and the evaluation considers whether they are clear, meaningful, and actionable; whether they are actually being followed; and whether they are leading to desired results.[16]

black lives matter protest

Crystallising FE Tenets into PFE Principles

The table below shows how mapping the Feminist Evaluation tenets to PFE and translating them into PFE principles makes the approach practical, actionable and usable.

FEMINIST EVALUATION TENETS AND CORRESPONDING PFE-FE PRINCIPLES

Tenet 1: Evaluation is a political activity, in the sense that the evaluator’s personal experiences, perspectives and characteristics come from, and lead to, a particular political standpoint.
Principle 1: Acknowledge and take into account that evaluation is a political activity; the evaluator’s personal experiences, perspectives, and characteristics come from and lead to a particular political stance.

Tenet 2: Knowledge is culturally and socially influenced.
Principle 2: Contextualize evaluation because knowledge is culturally, socially and temporally contingent.

Tenet 3: Knowledge is powerful, and serves direct and articulated purposes, as well as indirect and unarticulated purposes.
Principle 3: Generate and use knowledge as a powerful resource that serves an explicit or implicit purpose.

Tenet 4: There are multiple ways of knowing.
Principle 4: Respect multiple ways of knowing.

Tenet 5: Research methods, institutions and practices have been socially constructed.
Principle 5: Be cognizant that research methods, institutions and practices are social constructs.

Tenet 6: Gender inequality is just one way in which social injustice manifests, alongside other forms of social injustice, such as discrimination based on race, class and culture, and gender inequality links up with all three other forms of social injustice.
Principle 6: Frame gender inequities as one manifestation of social injustice. Discrimination cuts across race, class, and culture and is inextricably linked to all three.

Tenet 7: Gender discrimination is both systematic and structural.
Principle 7: Examine how discrimination based on gender is systematic and structural.

Tenet 8: Action and advocacy are regarded as appropriate ethical and moral responses from an engaged feminist evaluator.
Principle 8: Act on opportunities to create, advocate and support change, which are considered to be morally and ethically appropriate responses of an engaged feminist evaluator.

(Source: Podems, 2018)

Strengths and constraints

The potential scope for using feminist evaluation is broader than expected. Its strengths include that it is flexible, fluid, dynamic and evolving because it provides a way of thinking about evaluation, rather than a specific or prescriptive framework. Because of this flexibility, it can also be used in combination with other approaches and methods.

A distinguishing feature and strength of Feminist Evaluation is that it is transparent and explicit about its views on knowledge. It actively seeks to recognise and give voice to different social, political and cultural contexts, and to show how these privilege some ways of knowing over others, by focusing specifically on women and disempowered groups.

Evaluators who use feminist evaluation follow an inclusive approach, which ensures that inputs are obtained from a wide range of stakeholders. This enhances the reliability, validity and trustworthiness of the evaluation and makes it possible to draw accurate conclusions and make relevant recommendations.

In addition to the misconceptions mentioned in the introduction to this article, a constraint is that, apart from the PFE-FE model, limited guidance is available to operationalise the approach.

By Fia van Rensburg

[1] Dictionary.com | Meanings and Definitions of Words at Dictionary.com

[2] Read more on myths about Feminism here: Resources and Opportunities

[3] Feminist Evaluation and Gender Approaches: There’s a Difference? | Journal of MultiDisciplinary Evaluation

[4] Equality Is Not Enough: What the Classroom Has Taught Me About Justice

[5] Social inequality

[6] Structural inequality

[7] Making Feminist Evaluation Practical

[8] Mertens, D.M. and Wilson, A.T. 2012. Program Evaluation Theory and Practice. A Comprehensive Guide. The Guilford Press. New York.

[9] Donna M. Mertens. (2009). Transformative Research and Evaluation. New York: Guilford Press. 402 pages. Reviewed by Jill Anne Cho

[10] Feminist Evaluation and Gender Approaches: There’s a Difference?

[11] Ibid

[12] Inclusive Evaluation: Implications of Transformative Theory for Evaluation

[13] Patton, M.Q. 2011. Developmental Evaluation. Applying Complexity concepts to Enhance Innovation and Use. The Guilford Press. New York.

[14] Podems, 2018

[15] PFE Week: Principles-Focused Evaluation by Michael Quinn Patton

[16] Ibid

evaluation

Evaluation for Transformational Change – Webinar Summary

By Evaluation, Workshop

Development Works Changemakers joined a webinar on Evaluation for Transformational Change, organised by UNICEF’s Evaluation Office, EVALSDGs and the International Development Evaluation Association (IDEAS).

title screen for webinar

The presenters were Rob van den Berg and Cristina Magro, respectively the President and Secretary-General of IDEAS. The two are the editors of IDEAS’ recently published book “Evaluation for Transformational Change: Opportunities and Challenges for the Sustainable Development Goals (SDGs)”, on which this webinar was based.

The book presents essays (rather than academic articles) written by “learned agents” in both the Global South and Global North. It combines the perspectives and experiences from a variety of contexts.

The essays discuss ideas of what needs to be done by evaluators and the evaluation practice more broadly to progress from the traditional focus on programmes and projects to an increased emphasis on evaluating how transformational changes for the SDGs can be supported and strengthened. Van den Berg and Magro discussed some of the key essays and concepts presented in the book. They then opened the floor for questions and answers.

Dynamic Evaluation  

One key theme discussed was the need for evaluators to move towards dynamic evaluations for transformational change. Evaluators are encouraged to change their focus from the traditional ‘static’ evaluations of the past which look at what has happened and move towards ‘dynamic’ evaluations which deal with the complexities of transformational change.

Examples include the need to shift focus from programmes and projects to strategies and policies, from micro to macro, from country to global, and from linearity to complexity. The editors suggested several key practices for dynamic evaluations. Evaluations should be done in “quasi-real-time”, meaning not only looking at what has happened in the past and what is happening now, but also considering the potential for the future.

The context in which an intervention takes place should be emphasised, both to understand it and to improve it. Evaluations also need multidisciplinary teams that combine an array of expertise and insights, rather than relying on evaluation practice in isolation.

Here, they suggest that the involvement of universities in evaluation teams should be promoted. Academics have sociological and community insights and can contribute through background papers and studies. They can also offer more academic and theoretical perspectives which complement the more practical evaluation perspectives.

bookshelves

Systems Thinking

The editors promote systems thinking and systems evaluations for transformational change. They define systems as “dynamic units that we distinguish and choose to treat as comprised of interrelated components, in such a way that the functioning of the system, that is, the result of the interactions between the components, is bigger than the sum of its components.”

They suggest that by adopting a systems viewpoint, evaluators are in a better position to encourage learning, take on transformation thinking, and assist in identifying and promoting sustainable transformational changes.

To adequately adopt a systems-thinking approach, the editors highlighted four challenges and opportunities for us to consider:

  1. Evaluators firstly need to become ‘fluent’ in systems thinking in order to appropriately apply systems concepts, approaches and methods in their evaluations.
  2. Evaluators need to be increasingly receptive to systems analytics and the information and evidence it produces, especially those considering future-oriented scenarios that could lead to transformational change.
  3. The type of system will dictate the approach required. As such, while there are various approaches, instruments and methods that systems analytics offers, evaluators must use their discretion in identifying those most relevant to their assignment.
  4. Evaluations should provide insights as to whether interventions are able to overcome barriers that they face, enhancing sustainability.

Learning Loops

While learning and feedback loops are often encouraged in evaluation, the editors assert that learning and feedback loops are a key practice for transformation. Evaluators should not only be asking whether the intervention was implemented correctly and was effective, but whether the problem to solve was looked at in the right way to begin with.

By asking more difficult questions, one can better understand what kind of transformational change should actually be accomplished.  The editors discussed a triple loop of learning as depicted in the figure below.

While the first feedback loop asks what we have learned, the second loop looks at whether the initiative is indeed the right initiative for the problem needing amelioration. And the third loop asks if we have looked at the problem in the right way to begin with.

There should be constant feedback loops as we gain a deeper understanding of the programme, context, actors and so on, and of the actions we take to achieve transformation. We should increasingly look to the future of the programme rather than confining it to the present. Evaluations need to look beyond the intervention in itself and place it within the system it is supposed to address.

 

graph from webinar

Sustainability

Systems thinking and the triple learning loop together speak to the need for systems to become more sustainable. Evaluators have often considered sustainability, but it has typically been defined in terms of long-term programme results.

The editors assert the need for a different approach, emphasising that sustainability should be redefined as “an adaptive and resilient dynamic balance between the social, economic and environmental domains”, in which the economic domain no longer exploits the environmental (e.g. climate change) and social (e.g. social inequity) domains.

In order to be adaptive and resilient, one needs preparatory systems. Evaluators can play a role in pointing to these and issues that need to be addressed during the course of an evaluation.

The editors assert that sustainability is a systems issue: sustainability is achieved when systems become adaptive and balanced over time in the relationship between the three domains of social, economic and environment. When the social domain is disregarded, the consequences can include, and have included, inequity and inequality, and struggles with healthcare, labour conditions, and conflict.

On the other hand, when the environmental domain has been neglected, climate change, a loss of biodiversity and pollution have ensued. The economic domain tends to take precedence due to the common belief that economic growth will resolve societal ailments through creation of jobs and wealth, and environmental damage through the creation of new technologies.

In practice, the editors encourage evaluators to continuously ask broader questions about an intervention and how it interacts with these three domains. They propose three sustainability questions evaluators should consider when starting an evaluation, namely whether the transformation that the intervention aims for leads to:

  1. More equity, human rights, peace and social sustainability (social domain)
  2. Strengthening of natural resources, habitat and sustainable eco-services (environmental domain)
  3. Economic development that is equitable and strengthens our habitat (economic domain)

The editors encourage evaluators to use these questions as part of their “toolbox” when looking at transformational initiatives. By going through these questions, it becomes clearer where the programme could improve and where additional knowledge and expertise is required.

graffiti art

Concluding Thoughts

The webinar provided interesting food for thought with regard to contributing to transformational change. Many of the key principles raised are certainly ideal, including dynamic evaluations, learning and feedback loops, and considering the future of a programme for sustainability.

In order to contribute to transformational change, we need to promote constant learning, encourage participation from key stakeholders, increasingly expand the evaluation team to be informed by sector experts, and continuously look at potential scenarios, risks and hazards. The application of these principles can be harder in practice; often contractors require specific answers to specific questions, and to go beyond the scope can require additional budget and additional time.

While these ambitions may be larger than what is currently feasible for many evaluation contracts, change often only manifests through radical action. As evaluators, our thinking should be constantly stimulated, our learnings continuously shared, and boundaries tested.

Bearing such principles in mind and applying them where feasible, even one step at a time, can hopefully slowly but surely advance transformational thinking in programming and evaluation, and therefore contribute to desired transformational change.

By Jenna Joffe

M&E online learning during COVID

M&E Online Learning Resources

By Current Affairs, Evaluation

In the thick of COVID-19, the world is practising social distancing, social isolation and in some circumstances, being placed under lockdowns in an effort to flatten the curve and limit the spread of the virus. The crisis has affected everyone in different ways, but for those who have access to technology, it has allowed access to a few of the pleasures of the outside world from home.

This has allowed some to continue working remotely, continue schooling, take up online exercise classes, and keep in touch with friends and family. For some, there’s more opportunity to do those things that were always on the list of “if only I had the time.”

One such option is online capacity development. There are numerous virtual learning platforms, including webinars, short courses, and even degrees.

Online learning

As evaluation specialists, we’d like to suggest several resources for those wanting to brush up on their monitoring and evaluation (M&E) knowledge and skills. These resources can be useful for M&E practitioners, development sector workers, government staff and funders working in the evaluation sector.

Some are free while others have a cost; some earn you a certificate and others do not; some are self-paced and others have deadlines. The options vary. Here are a few to try out:

Online learning resources – for everyone

quote from betterevaluation

For many of us, taking on an online course may still be out of reach given limited time, cost implications and increased responsibilities in the home due to lockdowns. BetterEvaluation is an excellent resource hub for anyone and everyone involved in evaluation work.

The platform is free and contains resources useful and applicable to NPOs, funders, government, and external evaluators, at any level of an organisation, from junior to senior.

BetterEvaluation is a one-stop-shop for all evaluation-related queries, insights, and trending topics. The website includes a BetterEvaluation Resource Library, consisting of over 1600 evaluation resources including overviews, guides, examples, tools or toolkits, and discussion papers. The site also includes free training webinars, links to online courses and events, forums, and case studies.

Another great resource offered on the platform is its blogs, which are quick and easy to read and discuss current trends and topics in evaluation. The whole website is geared towards improving evaluation capacity and practice.

In early April, world-renowned evaluator and CEO of BetterEvaluation, Patricia Rogers, published a blog on the website explaining how the organisation would respond to COVID-19, which includes working collaboratively to create, share and support the use of knowledge about evaluation, and endeavouring to curate additional content to address the current context, including:

  • Real-time evaluation
  • Evaluation for adaptive management
  • Addressing equity in evaluation
  • Evaluation for accountability and resource allocation
  • Ways of doing evaluation within physical distancing constraints
  • Ways of working effectively online
  • Resources relating to evaluation in the COVID-19 pandemic

Time to start learning

As such, BetterEvaluation has its finger on the pulse of the pandemic and how it will affect the evaluation world, and is committed to delivering up-to-date content for any evaluation practitioner to remain informed and adapt to circumstances.

Keep checking the website in the coming weeks to see when this content becomes available.

By Jenna Joffe