Facilitating Virtual Participatory Workshops


South Africa is entering its third month since Covid-19 reached the country. Despite early action and lockdown efforts, the end of the crisis is still unclear globally. In the evaluation sector, evaluations have been cancelled, postponed, or are continuing using remote methods only.

Development Works Changemakers itself is continuing with multiple projects using remote methods, engaging in meetings using online platforms and undertaking data collection using telephonic or electronic means.

Evaluation during COVID

Our approach is typically highly consultative and participatory in nature. We closely engage with the client throughout the evaluation process and ensure we gather their input at key stages and on key deliverables. The benefits of taking such an approach include ensuring buy-in and ownership of stakeholders, validating processes and decision making, facilitating skills transfer, promoting learning, and building trust through transparency.

Some key consultative processes incorporated in our evaluations include:

  • Inception meetings at the commencement of an evaluation,
  • Theory of Change (ToC) and Theory of Action (ToA) workshops to clarify the programme logic or as part of design evaluation,
  • Findings validation workshops to share initial evaluation findings and receive stakeholder feedback for incorporation,
  • Final report findings presentations to share the final findings and conclusions,
  • Progress meetings throughout and more.

As such, the move to remote methods puts this face-to-face engagement to the test.

Working virtually

Many of us are accustomed to participating in virtual meetings, given our exposure to them prior to the pandemic and the fact that they run much like our normal face-to-face meetings. However, something many of us may not have been exposed to is facilitating a participatory workshop with a client and/or stakeholders.

In almost all circumstances, workshops work best when conducted face-to-face. Workshops are often much longer than meetings and, depending on the subject matter, can run over a few days. As such, a facilitator needs to be conscious of participants’ body language, signs of fatigue, and signs of misunderstanding.

To help entrench learning, face-to-face engagement allows participants to be broken into groups to work on activities and then present back to the larger group. Tea breaks in between also facilitate networking and the sharing of ideas, and help participants become more comfortable with the group.

As such, facilitating a workshop online requires adaptation, creativity and a new approach.

Good practices for an online workshop

Some good practices one should consider when facilitating an online workshop include:

Familiarise yourself with the platform

Make sure you are familiar with the online platform you are using. If you use a particular platform for the first time when running a workshop, time may be wasted figuring out how to use the tools adequately. Prepare beforehand!

Reiterate house rules

Especially if you are facilitating a large group, you should encourage participants to use the mute button when they are not speaking. If everyone’s microphones are on, the multiple sources of noise can become distracting – very likely the case while many of us work from home alongside family, children and pets. If you are facilitating a small group and participants are not in noisy environments, microphones can be left on, which can encourage more spontaneous engagement.

Share an agenda

Create and share an agenda ahead of time; this will help participants to focus if they understand how the time will be spent and will help you as a facilitator to keep the session on track. The agenda should include opportunities to engage with participants, as a pure lecture style will lead to participant fatigue.

Be prompt (or early)

Arrive online a few minutes early to test the technology. In cases where the host must be present before participants can join, it is even more imperative that the facilitator/host is prompt. Arriving a few minutes early also provides a chance to engage with participants in a more social manner and to set a warm, positive tone for the remainder of the session.

Encourage face-to-face (online)

Encourage participants to use their video. Video helps participants to feel more like they are in the same room and is more conducive to building rapport with one another than a microphone alone. It encourages more interaction and participation as well as accountability – participants are less inclined to multitask and engage in other non-workshop activities.

Being able to see people’s faces allows participants to pick up on non-verbal cues, and lets the facilitator keep a finger on the pulse of how participants are engaging – whether they are growing fatigued or bored, whether they need an energiser or a tea break, or whether someone does not understand something.

Check in with participants

Especially in a time like now, a check-in with participants is a good way to start a workshop. It is helpful to know if there is a low mood that needs picking up, whether certain sensitivities need extra focus, or if some participants may not be eager to participate. It is also a way for participants who don’t know each other to possibly find common ground with one another.

Introduce an icebreaker

In groups where participants do not know one another, an icebreaker is a good starting point. It should be creative, energising or light-hearted, and help participants feel a little more comfortable with one another. When a workshop is aiming for participants to collaborate on a product or deliverable, establishing group rapport and cohesion upfront is key.

Include breaks if needed

If workshops are long, plan regular breaks. It is often more difficult to concentrate in a virtual meeting, while only looking at a screen, than in real life, where there is more visual stimulation and engagement.

Respect is key

Mutual respect must be the norm in virtual meetings for them to be successful. One must be respectful of participants’ time and be fully present. As a facilitator, these attributes should be encouraged upfront and throughout.

Additional tips for online workshops

As part of an evaluation planned in early 2020, prior to the pandemic’s spread in South Africa, Development Works Changemakers had intended to hold a face-to-face ToC and ToA workshop with our client. With the decision to continue the evaluation using remote methods only, the workshop was instead held online.

In this case, we used Microsoft Teams. The workshop went well, with all attending stakeholders contributing to and refining the ToC and ToA. The final deliverable was approved by stakeholders as a core reference point for the remainder of the evaluation.

In addition to employing some of the points above where applicable, there were a couple of other ways our team adapted to facilitating the remote workshop, and we found these to work well in our circumstances:

Consider access to the internet

Some participants did not have reliable, ongoing access to the internet. This is not uncommon: according to the General Household Survey (2018), only 10% of South Africans have a home internet connection. The participants were relying on subsidised data, and in cognisance of this, our team agreed to hold an abbreviated version of the workshop.

Core content would initially be covered in a brief session, and we planned for follow-up engagements if the content was not sufficiently covered. The benefit of this approach was that no undue stress or pressure was placed on any participants who were restricted by data usage.

Additionally, participant fatigue was limited due to the shorter nature of the workshop, and the process allowed for participants to have time away from the workshop setting for further reflection.

Share resources beforehand

Our ToC and ToA workshops typically involve an introduction to related terminology and theoretical learning. However, due to the time constraints imposed, our team put some relevant and user-friendly resources together and shared them with participants before the workshop.

This included definitions of terminology related to programme theory, graphic examples of programme theories, and the programme’s draft logic model that would be further refined in the virtual workshop. We also provided a list of questions around the programme to encourage some thoughts and ideas about how best to represent its programme theory.

This “pre-work” encouraged participants to come to the workshop with an understanding of programme theory and armed with ideas and contributions, taking full advantage of the limited time. We also ensured that, before diving into working on the programme theory together, participants were given the opportunity to ask questions and gain further clarity about the content shared beforehand.

Take advantage of screen share and real-time technology

During the workshop, our team used the screen-share functionality to share workings on Lucidchart, where the ToA and ToC could be amended in real-time for the group of participants to see and comment on. Within the brief workshop, the group was able to revise the ToA. A ToC was then constructed by the lead evaluator based on the ToA developed. The ToA and ToC were then shared with the group a few days after the workshop for their further feedback.

This process worked well, as it gave participants time to reflect further and allowed them to make suggestions that would not necessarily have come to mind in a half-day workshop. Participants shared their feedback with the lead evaluator and where appropriate, feedback was incorporated.

This iterative process enhanced the quality of the input and buy-in of the ToA and ToC. It was found that no further workshop sessions were required, and participants were satisfied with the ToC and ToA produced from the single session and feedback.

The methodology undertaken here may not necessarily be applicable or appropriate for all workshops or evaluations. However, we found that this process was successful in our circumstance. We made efforts to meet the needs of the stakeholders, adapt to the limitations imposed, and at the same time ensure that participants had a voice and were heard.

By Jenna Joffe

Evaluation for Transformational Change – Webinar Summary


Development Works Changemakers joined a webinar on Evaluation for Transformational Change, organised by UNICEF’s Evaluation Office, EVALSDGs and the International Development Evaluation Association (IDEAS).

The presenters were Rob van den Berg and Cristina Magro, the President and Secretary-General of IDEAS, and the editors of IDEAS’ recently published book “Evaluation for Transformational Change: Opportunities and Challenges for the Sustainable Development Goals (SDGs)”, on which this webinar was based.

The book presents essays (rather than academic articles) written by “learned agents” in both the Global South and Global North. It combines the perspectives and experiences from a variety of contexts.

The essays discuss ideas of what needs to be done by evaluators and the evaluation practice more broadly to progress from the traditional focus on programmes and projects to an increased emphasis on evaluating how transformational changes for the SDGs can be supported and strengthened. Van den Berg and Magro discussed some of the key essays and concepts presented in the book. They then opened the floor for questions and answers.

Dynamic Evaluation  

One key theme discussed was the need for evaluators to move towards dynamic evaluations for transformational change. Evaluators are encouraged to change their focus from the traditional ‘static’ evaluations of the past which look at what has happened and move towards ‘dynamic’ evaluations which deal with the complexities of transformational change.

Examples include the need to shift focus from programmes and projects to strategies and policies; from micro to macro; from country to global; and from linearity to complexity. The editors suggested several key practices for dynamic evaluations. Evaluations should be done in “quasi-real-time”, meaning not only looking at what has happened in the past and what is happening now, but also considering the potential for the future.

The context in which an intervention takes place should be emphasised – both understanding it and helping to improve it. There is also a need to form multidisciplinary evaluation teams, combining an array of expertise and insights beyond evaluation practice in isolation.

Here, they suggest that the involvement of universities in evaluation teams should be promoted. Academics have sociological and community insights and can contribute through background papers and studies. They can also offer more academic and theoretical perspectives which complement the more practical evaluation perspectives.

Systems Thinking

The editors promote systems thinking and systems evaluations for transformational change. They define systems as “dynamic units that we distinguish and choose to treat as comprised of interrelated components, in such a way that the functioning of the system, that is, the result of the interactions between the components, is bigger than the sum of its components.”

They suggest that by adopting a systems viewpoint, evaluators are in a better position to encourage learning, take on transformation thinking, and assist in identifying and promoting sustainable transformational changes.

To adequately adopt a systems-thinking approach, the editors highlighted four challenges and opportunities for us to consider:

  1. Evaluators firstly need to become ‘fluent’ in systems thinking in order to appropriately apply systems concepts, approaches and methods in their evaluations.
  2. Evaluators need to be increasingly receptive to systems analytics and the information and evidence it produces, especially those considering future-oriented scenarios that could lead to transformational change.
  3. The type of system will dictate the approach required. As such, while there are various approaches, instruments and methods that systems analytics offers, evaluators must use their discretion in identifying those most relevant to their assignment.
  4. Evaluations should provide insights as to whether interventions are able to overcome barriers that they face, enhancing sustainability.

Learning Loops

While learning and feedback loops are often encouraged in evaluation, the editors assert that learning and feedback loops are a key practice for transformation. Evaluators should not only be asking whether the intervention was implemented correctly and was effective, but whether the problem to solve was looked at in the right way to begin with.

By asking more difficult questions, one can better understand what kind of transformational change should actually be accomplished. The editors discussed a triple loop of learning, as depicted in the figure below.

While the first feedback loop asks what we have learned, the second loop looks at whether the initiative is indeed the right initiative for the problem needing amelioration. And the third loop asks if we have looked at the problem in the right way to begin with.

Feedback loops should be constant as we deepen our understanding of the programme, context, actors and the actions we take to achieve transformation. We should increasingly look to the future of the programme rather than isolating it to the present. Evaluations need to look beyond the intervention in itself and place it within the system it is supposed to address.


Figure: the triple loop of learning (from the webinar)


Systems thinking and the triple learning loop together speak to the need for systems to become more sustainable. Evaluators have often considered sustainability, but it has typically been defined in terms of long-term programme results.

The editors assert the need for a different approach, emphasising that sustainability be redefined as “an adaptive and resilient dynamic balance between the social, economic and environmental domains”, where the economic domain no longer exploits the environmental (e.g. climate change) and social (e.g. social inequity) domains.

In order to be adaptive and resilient, one needs preparatory systems. Evaluators can play a role in pointing to these and issues that need to be addressed during the course of an evaluation.

The editors assert that sustainability is a system issue – sustainability is achieved when systems become adaptive and balanced over time in the relationship between the three domains: social, economic and environmental. Where the social domain has been disregarded, the consequences have included inequity and inequality, and struggles with healthcare, labour conditions and conflict.

On the other hand, when the environmental domain has been neglected, climate change, a loss of biodiversity and pollution have ensued. The economic domain tends to take precedence due to the common belief that economic growth will resolve societal ailments through creation of jobs and wealth, and environmental damage through the creation of new technologies.

In practice, the editors encourage evaluators to continuously ask broader questions about an intervention and how it interacts with these three domains. They propose three sustainability questions evaluators should consider when starting an evaluation, namely whether the transformation that the intervention aims for leads to:

  1. More equity, human rights, peace and social sustainability (social domain)
  2. Strengthening of natural resources, habitat and sustainable eco-services (environmental domain)
  3. Economic development that is equitable and strengthens our habitat (economic domain)

The editors encourage evaluators to use these questions as part of their “toolbox” when looking at transformational initiatives. By going through these questions, it becomes clearer where the programme could improve and where additional knowledge and expertise is required.

Concluding Thoughts

The webinar provided interesting food for thought with regards to contributing to transformational change. Many of the key principles raised are certainly ideal, including dynamic evaluations, learning and feedback loops and considering the future of a programme for sustainability.

In order to contribute to transformational change, we need to promote constant learning, encourage participation from key stakeholders, increasingly expand the evaluation team to be informed by sector experts, and continuously look at potential scenarios, risks and hazards. The application of these principles can be harder in practice; often contractors require specific answers to specific questions, and to go beyond the scope can require additional budget and additional time.

While these ambitions may be larger than what is currently feasible for many evaluation contracts, change often only manifests through radical action. As evaluators, our thinking should be constantly stimulated, our learnings continuously shared, and boundaries tested.

Bearing such principles in mind and applying them where feasible, even one step at a time, can hopefully slowly but surely advance transformational thinking in programming and evaluation, and therefore contribute to desired transformational change.

By Jenna Joffe

The Value of Appreciative Inquiry in the Monitoring & Evaluation, Reporting and Learning Space


The evaluation space can be a tricky one to navigate, especially considering that making evidence-based judgements about the merit or worth of programmes – what works and what does not – is an integral part of evaluation.

Development Works Changemakers (DWC) has been providing Monitoring & Evaluation (M&E) support and capacity development to a non-profit organisation working in the basic education space since 2018. This organisation wanted to expand its M&E system to also incorporate reporting and learning.

We recently introduced Appreciative Inquiry (AI) to help them build on the positive core of their existing reporting practice, and to track and magnify that into an improved reporting practice in 2020, as part of moving from a traditional monitoring and evaluation (M&E) system to a monitoring, evaluation, reporting and learning (MERL) system.

Understanding Appreciative Inquiry

Appreciative Inquiry is a useful and interesting approach for creating positive energy around reporting by focusing on what works. The methodology focuses on what works best, but also identifies areas that need attention or could be improved.

It can’t be used in every circumstance, but it is a tool that can be very useful in certain situations. DWC has used AI to activate organisational change processes related to MERL (as in the example provided above), to supplement Theory of Change (ToC) workshops, and to elicit data from different perspectives during evaluation processes.

What is Appreciative Inquiry?

AI is an action-research methodology that enables organisations to co-construct their desired future by focusing on the positive qualities of an organisation and leveraging them to enhance it. AI is founded on 8 key principles, namely:

  1. Constructionist – Understanding a reality that is socially constructed through language and conversations
  2. Simultaneity – Inquiries create an intervention and initiate change
  3. Poetic – Organizations are an endless source of study and learning which constantly shapes the world as we know it
  4. Anticipatory – Using a hopeful image to inspire action
  5. Positive  – Believing that positive questions lead to positive change
  6. Wholeness – Bringing out the best in people and organizations to stimulate creativity and build collective capacity
  7. Enactment – Starting the process of positive change with self as a living model of the future
  8. Free choice – Believing that free choice liberates power and brings about enhanced results

Source of principles: Sideways Thoughts

Using AI in evaluations

AI was developed as an organizational change methodology but has been adapted for use in evaluations. In the evaluation community, AI is not widely accepted, and is sometimes even frowned upon. However, it offers a different approach that adds unique value.

For the past few decades, evaluators have focused on the judgement aspect of evaluation. What distinguishes evaluation from other applied social research is that it has to make a judgement on the merit or worth of programmes and projects.

Each case is unique, and AI is not suitable for use in all evaluations. Care must be taken to consider the nature of the task at hand and which other methodologies are being used in conjunction.

It should also be noted that AI is not an evaluation approach, and does not feature as an evaluation theory. It is merely a tool that can be used for data collection and process facilitation.

When does Appreciative Inquiry work?

As mentioned above, AI can work where energy is required to move processes forward, and it can also be used in evaluations. AI works well in contexts where a project or programme is not working so well. In such situations, project or programme stakeholders may become defensive when evaluators are appointed, as they anticipate negative judgement.

This may impede the openness of stakeholders, which makes it difficult to learn from failures or challenges. AI provides a non-threatening environment in which stakeholders can discuss a project without fear of judgement. By starting with the identification of what works, a safe environment is created in which to also discuss what does not work so well. The idea that our questions have the power to shape reality may be a frightening thought, but one worth exploring.

Understanding the approach

The underlying philosophy for AI is that what we focus our attention on in the social world will grow and develop. If we focus on the positive, the positive will grow and multiply, but if we focus on the negative, that will thrive instead.

This means that if we follow a problem-centred approach, we get stuck in the misfortune of the problem. The more we try to fix it, the more it grows.

Well, let’s be fair – sometimes problem-solving works, but how many problems did development initiatives (mostly based on a problem or deficit analysis) manage to solve over the past 50 or more years?

Some critics argue that you can’t just look at the positives – what about the negatives? In many ways this concern is valid; in others, it highlights how AI can be misunderstood.

AI does look at the negatives, but in a way that prevents them from dominating the conversation. The negatives and challenges are surfaced, but more constructively, without pulling the energy down.

Steps in the AI process

In the monitoring and evaluation space, AI could be used as a fully-fledged process, or elements of it could be used. The AI process is described in terms of the 4-D, 5-D or 5-I models. These models can also be linked to a planning process that incorporates some elements of the traditional SWOT planning process. SWOT planning looks at strengths, weaknesses, opportunities and threats. The SOAR process considers strengths and opportunities and works with these to develop aspirations and articulate desired results.

Evidence that supports AI

Through a remarkable body of research, neuroscience has established that we affect people either positively or negatively by the way in which we engage with them and the way they perceive us (also as evaluators).

Prominent neuroscientist Evan Gordon (2000) reminds us that the “avoid danger and maximize reward” principle is an over-arching organizing principle in the brain, which translates into the approach-avoid response.

When our brain tags a stimulus as “good,” we engage in the stimulus (approach), and when our brain tags a stimulus as “bad,” we will disengage from it (avoid). Translated into the evaluation space, this means that if our evaluation processes are perceived as threatening by stakeholders, they may well disengage.

We also know that when people are “seen, heard and loved”, the associated surge in brain chemicals enables them to think better and more creatively (connecting behaviour, or approach). Conversely, when people feel criticised, judged and dismissed, their brains literally shut down as they go into flight mode (avoiding behaviour, or disengagement).

The power of AI

There is a wealth of evidence that shows the power of our words. When athletes use positive imaging and words to tap into their potential to perform at their best, we think it is extraordinary. Why then, do we hesitate to use the same approach to propel our projects and organisations to perform at their best?

Can we as evaluators find a way of using generative questions to tap into what works, so that we can learn from it and amplify it?

The power of questions is aptly described by Browne (2008), who pointed out that every question has a direction, and because of that direction it carries either generative or destructive energy.

AI is interested in generative questions – those that “build a bridge” or “turn on a light”. The rationale for AI is that if we pose provocative questions that discover the positive core of a project or programme, we can multiply and magnify what works.

By doing this tracking and fanning, we focus our energy on what works, and this creates the energy for the programme to grow in that positive direction.

Final Thoughts

Essentially, AI holds a lot of potential, especially when used appropriately. When you identify what works and amplify it, great changes can follow.

AI is underpinned by a relational and conversational approach to human systems. This approach pays attention to the patterns in the system and the expressive relationship between the elements of the system.

Human systems are living systems, and in these systems patterns of belief; communication; action and reaction; sense-making and emotion; are important – these are the things that “give life” to the system.

At DWC, we specialise in a variety of methodologies and creative approaches, and we adjust and customise each approach depending on each organisation’s specific needs, expectations and other contextual factors. To find out more about how we can help your organisation measure, evaluate, shape and create positive change in a powerful way, contact Lindy Briginshaw.

By Fia van Rensburg

Data Literacy – a language that speaks louder than protest action


The South African Cities Network (SACN) hosted a “Municipal Finance Data Storytelling Workshop” on 5 November 2019 at the Tshisimani Centre for Activist Education at Tshimologong in Johannesburg. Participants engaged practically with data through data storytelling and data journalism.

Data storytelling and data journalism

Experts gave the following presentations:

  • Overview of the State of the City Finances Report – Danga Mughogho, SA Cities Network 
  • The South African Cities Open Data Almanac (SCODA) and digital data stories – Jonathan Wilson, SA Cities Network; Richard Gevers, OpenData Durban
  • Data Journalism approaches to telling stories with data – Asanda Ngoasheng
  • Poster walk: Govtech innovations and Civil Society Stories  – Kirsten Pearson

The workshop combined background information, practical examples, links to key municipal finance information sources and data analysis tools, and a poster exhibition of recent initiatives to give citizens access to key municipal data and develop their capacity to engage with data and hold municipalities accountable. It was an enlightening experience amidst the flood of dismal messages in the media following the Public Enterprises Minister’s recent announcement that municipalities owe Eskom R23,5 billion.

The most valuable takeaway from the workshop was the opportunity to get practical: creating a data story in a group activity, guided by a data story template. This tool, and the skill of data storytelling, is not only useful to journalists but can be helpful to evaluators too.

Data literacy

Another benefit of this workshop was the realisation of just how important data literacy is. Development Works Changemakers recently evaluated a school-based software coding programme. The evaluation highlighted the importance of digital literacy, and specifically of coding as a future form of literacy.

In the not-so-distant future, the ability to code will be an imperative skill, not only for software developers but in all fields. The South African Education Department is already implementing related initiatives and has intensified planning for future programmes, recognising the vital need to capacitate teachers and to prepare learners for a data-driven world.

These initiatives are often still pitched as relevant only to certain career fields, such as Science, Technology, Engineering and Mathematics (STEM). Gradually, their value beyond direct technical application is being realised. This is expressed by Minister Angie Motshekga, who said: “This will not only develop STEM skills, but also contribute to effectively developing children’s creativity, critical thinking, design thinking, and digital skills. This will ensure that South Africa develops learners who are makers and inventors who will contribute to building an innovative culture in South Africa”.

It is imperative that the intention shared by Minister Motshekga is acted on, with follow-through by the Department and key decision-makers. Such statements are only meaningful if they are translated into tangible action that ensures access to quality education for all South Africans.

Active citizenship and accountable government

Critical thinking and digital skills are not only relevant to coding; they also shape how we engage with the data available in our everyday lives, and they influence the extent to which active citizenship is possible and effective. The Cities Network Workshop demonstrated how data literacy can enable ordinary citizens to engage actively and effectively with government at all levels, and how citizens can contribute to strengthening democracy. This goes beyond participating in elections: data-literate citizens can hold government accountable in a constructive way.


Data literacy is already important, and will become increasingly so, in enabling citizens to play a more active role in ensuring that public finances are spent responsibly, where they are needed most, for the best benefit of society.

The most compelling example of how data literacy can assist citizens to play an active role in communicating their needs and holding government accountable was found in the EU-funded Accounting for Basic Services (ABS) project. This initiative was implemented at local government level in selected communities through a partnership between various development stakeholders. The ABS project strengthened community engagement with local government to ensure “equitable, just and effective use of municipal funds”.

Through the project, the use of budget analysis and social accountability tools was promoted to engage communities, encourage responsive governance and emphasise accountability. The project demonstrated that communities and their organisations can understand and engage with municipal finances. It assisted communities to understand where and on what money is being spent; assess whether government’s priorities and projects sufficiently address their needs; voice their concerns and needs; and hold government accountable.

Constructive and empowered participation


This type of initiative may be key to channelling aspirations constructively. It requires a strong sense of agency and involvement amongst ordinary people, helping to make their voices heard clearly and effectively. With higher levels of data literacy and active, empowered participation, it may be possible to find a language that speaks louder than protest action, ensuring timely attention to pressing issues and defusing the intense frustration that frequently leads to confrontation and the destruction of infrastructure.

This project provides a glimpse of hope amidst many challenges by showing what is needed and what is possible. Imagine a future where data-literate South African citizens are active participants in governing our country for the benefit of all.

By Fia van Rensburg


The inspirational potential of Africa’s young leaders


Leadership in Africa is often reduced to a caricature of old male dictators destroying their countries through patronage, greed, violence and abuse of the state. This caricature is frequently the sole focus of international news reports about Africa. Africa’s young leaders have a chance to change this.

While this depressing picture can reflect one kind of African reality, it tends to obscure the many different kinds of positive leadership demonstrated by thousands of African citizens working at a number of levels in society. Taking Nigerian writer Chimamanda Ngozi Adichie’s warning seriously, it is important to challenge the single story of failed African leadership at the elite political level, with counter-narratives of multiple kinds of leadership emerging on the continent. 

The role of the youth

Perhaps if we could continue to build and harness this multi-faceted leadership potential, true democracy might start to loosen the grip of entrenched negative political leadership patterns. The younger generations are particularly important in this endeavour.     


The Young African Leaders Initiative (YALI) was launched in 2010 by President Barack Obama. It seeks to invest in the next generation of African leaders. Funded by the U.S. Department of State, it introduced the now widely respected Mandela-Washington Fellowship for Young African Leaders (MWF). Every year since 2014, hundreds of young people (ages 25-35) already demonstrating leadership potential have been selected from across Sub-Saharan Africa to participate in a six-week “Leadership Institute” at a U.S. college.

This “Institute” is an intensive academic and practical leadership course informed by the Social Change Model of leadership. It is aimed at developing values-based and servant leadership among participants. Fellows are selected to participate from three areas: Business, Civic Engagement and Public Management. Alongside the many activities during the six-week course, Fellows also attend a Summit and are expected to develop a Leadership Development Plan (LDP) for implementation on their return to their home countries. 


To add value to these U.S.-based activities, USAID has sponsored several Africa-based “follow-on” activities which can be completed during the year-long Fellowship. These include:

  • Professional Practicums (high-level internships at suitable companies);
  • Mentorships;
  • Speaker Travel Grants;
  • Continued Networking and Learning Events;
  • Collaboration Fund Grants; and
  • Involvement in Regional Advisory Boards.

Fellows have also gone on to form alumni associations in their respective countries and collaborate in various ways. 

Development Works Changemakers

In early 2019, Development Works Changemakers was commissioned to conduct an evaluation of the Africa-based follow-on activities. Along with an electronic questionnaire and one-on-one Skype interviews, Development Works Changemakers conducted several country visits to meet with Mandela-Washington Fellows and learn about the impact of the follow-on activities on them.

We visited Ghana, Nigeria, Kenya and Zimbabwe, conducting in-depth focus group discussions and one-on-one interviews with Fellows and Practicum hosts. Broadly, the evaluation found that the Africa-based follow-on activities added significantly to the value of the U.S.-based Leadership Institute, cementing lessons through practical experience and building networks with graduates in Fellows’ home countries and elsewhere. 

Andrew Hartnack, a senior evaluator with Development Works Changemakers, visited Accra, Nairobi and Harare, meeting with over 40 Mandela-Washington Fellows in the course of the evaluation. What stood out for Andrew in meeting these Fellows was their incredible energy, vision, integrity and passion to make a difference in their own sectors.


Kenyan Mandela-Washington Fellows collaborate by advising each other on projects they are working on. Here, a Fellow trying to build a community hospital receives advice on her plans from a Fellow who is an architect and another who works for the Ministry of Health in a public hospital.

The potential of Africa’s young leaders

Andrew met Fellows working in government Ministries who were positively influencing their colleagues and participating in various ways in building the institutional capacity of their units. He also met many young entrepreneurs who, through their MWF experience, had decided to apply their talents not just to money-making, but to the social issues they saw around them. For example, one fashion designer in Zimbabwe partnered with a local rural empowerment organisation to work with rural women in designing, making and marketing local products for sale. 

Other Fellows shifted their focus towards activism and lobbying on behalf of various constituencies which are under threat in their countries. In Zimbabwe and elsewhere, incredible bravery has been shown by a number of Fellows as they try to speak truth to power and make a difference in their countries. Fellows are building hospitals and orphanages, founding companies and non-profits, registering companies offering innovative solutions in areas such as climate change mitigation, and reforming government policy and practice in various ways. 

This crop of Africa’s young leaders – half of whom are women – is beginning to show what can be achieved with a little support, and through networking and collaborating with other young people committed to making a difference. If Africa’s potential is to be realised, it is young people like this who must be the next leaders of economic, political and human development efforts on the continent. There is certainly cause for great hope if all this human potential can be fully harnessed.

By Andrew Hartnack

TOC workshop for ECD

Integrating appreciative inquiry in a ToC workshop for an ECD programme


Early Childhood Development (ECD) at its best is about practitioners who put the child first and go the extra mile in their care. Development Works Changemakers endeavours to incorporate creative and innovative elements in all our work. Some years ago we included Appreciative Inquiry (AI) in a Theory of Change (ToC) workshop with a client who implements ECD programmes.

The ToC workshop

The workshop started with background on what a ToC, a Theory of Action (ToA) and a Logic Model are, followed by examples of what a ToC can look like. Participants then felt confident enough to build their programme’s ToC, guided by a set of questions. They worked hard, and through lively discussion and input, they plotted their high-level ToC. Below is an outline of the workshop process:

TOC workshop progress infographic

Once this brain-twisting exercise was done, participants went on to some “easier” work: the group had to consider what ECD looks like at its best. A thorough AI process can take an entire day or more, but it can be adapted. With only one day for the entire workshop, the team condensed the AI methodology into an hour, using four simple questions and five steps, as shown below:

ECD infographic

However, with only four participants, expectations of what the exercise could achieve were not high; there was a risk of not finding any common themes at all. The process required that common themes be identified from participants’ stories and prioritised according to their potential to inform the design of the best possible ECD programme.

The results

Despite these concerns, the group was not disappointed. The exercise proved once again that AI “lights up” our thinking and surfaces important aspects that might otherwise be overlooked. The themes that emerged, ranked in order of importance, were:

  • Putting children first – making the environment ready for them; focusing on their needs and designing the school environment according to children’s needs, including those with diverse abilities.
  • ECD practitioners who are more caring and go the extra mile for the learners.
  • Better relationships with parents and recognising them as key in their children’s development.
  • All stakeholders are motivated and engaged. 

Some of these themes were already incorporated to some extent, but the AI exercise made their importance clear, ensuring that these aspects will feature more prominently in the programme. Most importantly, however, the themes identified in the AI exercise point to critical values underpinning ECD at its best. Infusing these values explicitly into the programme will enhance its impact.


In conclusion, the workshop elicited the following feedback:

  • I have improved knowledge on how to develop a M&E ToC. I liked the practical aspects to the workshop.
  • Good workshop. Lots of headway made. Better understanding of the M&E tools for me. The process of getting to the final outcome was good. 
  • Now I understand the theory of change in ECD. 
  • To sum up, I loved the way she facilitated the workshop, simple, open-ended and quite fun.  

Article written by Fia van Rensburg for Development Works Changemakers