
Ethics in research and how to handle socio-economic challenges in fieldwork

By | Ethics, Evaluation, Research

It is imperative for researchers to abide by clear research ethics in order to conduct their work in a professional and ethical manner. Simply put, ethics are a set of rules that distinguish between “right” and “wrong”, “good” and “bad”, in any situation. Ethics are about the norms of conduct that distinguish between acceptable and unacceptable behaviour in society.

Ethics in research

In research with human subjects, maintaining sound ethics is crucial at every stage, be it research design, fieldwork, or the writing up and sharing of findings. At the most basic level, research ethics are informed by the principle of “do no harm”. Most of the codes of ethics used in research today were developed for the medical field, where trials and research on human subjects are common.

Many of the principles developed in these codes apply to social/development research and evaluation, including “do no harm”, and the need to bear in mind the power differentials between the researcher and the research subjects. Ethical characteristics, therefore, include honesty, objectivity, integrity, carefulness, openness, respect, and confidentiality.

One of the most important aspects of any research ethics code is informed consent. A participant has a right to understand fully the purpose of the research and the risks and benefits of participation. They have the right to anonymity, and to withdraw from the research at any point, or refuse to answer any question.

Vulnerable groups

When research is being conducted with vulnerable groups or individuals, such as children, refugees, minorities, or people who are abused or ill, these principles take on even greater importance due to the power differential in the research relationship. The risk of harm to the participant increases, whether during the research process or as a result of the publication of findings. Research design should thus include ways to reduce or minimise the risk of harm.

However, conducting social research is often challenging and throws up complex scenarios that are not ethically straightforward. Successful and ethical research outcomes require properly trained and well-prepared researchers. Research plans and proposed methodologies in certain cases (e.g. research with children or other vulnerable subjects) need to be submitted to a recognised research ethics committee for approval and guidance before any fieldwork can commence. It is also imperative that research abides by the various laws that apply in any country regarding research generally, and with vulnerable populations.

Overcoming challenges

At Development Works Changemakers (DWC) we take research ethics very seriously in all our research and evaluation activities. All of our senior research staff hold postgraduate degrees, have taken courses in research ethics, and have conducted advanced research requiring ethics clearance. They are thus in a position to lead fieldwork teams in ethical research practice. DWC also raises ethical issues from the outset with every partner or client. We also factor research ethics clearance into our proposed budgets and project timeframes.

Training fieldworkers

DWC works with an extensive network of trusted associates and freelancers on repeat assignments. This allows trust to be established over time and our ethical approach to be embedded. Our team is also rigorous in recruiting and managing new fieldworkers to ensure quality standards are always adhered to. Fieldworkers are provided with a detailed contextual understanding and briefing. Ideally, we work with researchers who come from the community where the research is taking place. This ensures ownership and a deep sense of community connection, understanding and networks.

Fieldworkers go through a detailed training programme before a fieldwork intervention. We focus on the local context, research ethics, requirements and expectations, study objectives, and the methodology and tools to be used.

Risk mitigation

Through scenario planning, the team also role-plays and discusses the different risks and challenges that may arise, and how best to mitigate any problems experienced in the field. Technical training is also provided on data gathering using tablets and mobile phones. Research teams are always fully prepared and well oriented to carry out their fieldwork assignments as successfully as possible.

Given challenging socioeconomic conditions, risks do materialise in the field, including security risks such as crime and threats to the safety of fieldworkers and equipment. No research study is worth risking the safety of a team member. At all times, ethical behaviour guides the decisions we make while running challenging research and evaluation assignments, especially in under-resourced communities where risks are high.

People are unpredictable, and community dynamics and political contexts can be complicated. No matter how well trained fieldworkers may be, communities can respond in unexpected ways, bringing challenges that could not have been anticipated.

Understand the circumstances

It is important to be appreciative of participants’ time and input. However, a balance is needed in respect of any material payment or gift offered in return for participation. Airtime, a snack or small meal may be provided in return for a person’s participation in an interview. Our team shows gratitude and appreciation, in line with good research ethical practice and guidelines.

Fieldworkers always need to be trained in handling unexpected situations in a professional and ethical manner. If in doubt, there is always a senior member of staff to guide them in such situations. Treating people with respect, dignity and tact, and explaining the project objectives and terms carefully helps ensure mutual respect, good research practice and positive results.

Our DWC portfolio is a testament to how we practice ethics and understanding in the workplace. We’d love to work with you.


The inspirational potential of Africa’s young leaders

By | Evaluation, Workshop

Leadership in Africa is often reduced to a caricature of old male dictators destroying their countries through patronage, greed, violence and abuse of the state. This is often the sole focus of international news reports about Africa. Africa’s young leaders have a chance to change this.

While this depressing picture can reflect one kind of African reality, it tends to obscure the many different kinds of positive leadership demonstrated by thousands of African citizens working at a number of levels in society. Taking Nigerian writer Chimamanda Ngozi Adichie’s warning seriously, it is important to challenge the single story of failed African leadership at the elite political level, with counter-narratives of multiple kinds of leadership emerging on the continent. 

The role of the youth

Perhaps if we could continue to build and harness this multi-faceted leadership potential, true democracy might start to loosen the grip of entrenched negative political leadership patterns. The younger generations are particularly important in this endeavour.     


The Young African Leaders Initiative (YALI) was launched in 2010 by President Barack Obama. It seeks to invest in the next generation of African leaders. Funded by the U.S. Department of State, it introduced the now widely respected Mandela-Washington Fellowship for Young African Leaders (MWF). Every year since 2014, hundreds of young people (aged 25-35) already demonstrating leadership potential have been selected from across Sub-Saharan Africa to participate in a six-week “Leadership Institute” at a U.S. college.

This “Institute” is an intensive academic and practical leadership course informed by the Social Change Model of leadership. It is aimed at developing values-based and servant leadership among participants. Fellows are selected to participate from three areas: Business, Civic Engagement and Public Management. Alongside the many activities during the six-week course, Fellows also attend a Summit and are expected to develop a Leadership Development Plan (LDP) for implementation on their return to their home countries. 

youth at yali

To add value to these U.S.-based activities, USAID has sponsored several Africa-based “follow-on” activities which can be completed during the year-long Fellowship. These include:

  • Professional Practicums (high-level internships at suitable companies);
  • Mentorships;
  • Speaker Travel Grants;
  • Continued Networking and Learning Events;
  • Collaboration Fund Grants; and
  • Involvement in the Regional Advisory Boards.

Fellows have also gone on to form alumni associations in their respective countries and collaborate in various ways. 

Development Works Changemakers

In early 2019, Development Works Changemakers was commissioned to conduct an evaluation of the Africa-based follow-on activities. Along with an electronic questionnaire, and one-on-one Skype interviews, Development Works Changemakers conducted several country visits to meet with Mandela-Washington Fellows and learn about the impact of the follow-on activities on them.

We visited Ghana, Nigeria, Kenya and Zimbabwe, conducting in-depth focus group discussions and one-on-one interviews with Fellows and Practicum hosts. Broadly, the evaluation found that the Africa-based follow-on activities added significantly to the value of the U.S.-based Leadership Institute, cementing lessons through practical experience and building networks with graduates in Fellows’ home countries and elsewhere. 

Andrew Hartnack, a senior evaluator with Development Works Changemakers, visited Accra, Nairobi and Harare. He met with over 40 Mandela-Washington Fellows in the course of this evaluation. What stood out for Andrew in meeting these Fellows was their incredible energy, vision, integrity and passion to make a difference in their own sectors.

Yali event

Kenyan Mandela-Washington Fellows collaborate by advising each other on projects they are working on. Here a Fellow trying to build a community hospital is getting important advice on her plans from a Fellow who is an architect, and a Fellow who works for the Ministry of Health in a Public Hospital.

The potential of Africa’s young leaders

Andrew met Fellows working in government Ministries who were positively influencing their colleagues and participating in various ways in building the institutional capacity of their units. He also met many young entrepreneurs who, through their MWF experience, had decided to apply their talents not just to money-making, but to the social issues they saw around them. For example, one fashion designer in Zimbabwe partnered with a local rural empowerment organisation to work with rural women in designing, making and marketing local products for sale. 

Other Fellows shifted their focus towards activism and lobbying on behalf of various constituencies which are under threat in their countries. In Zimbabwe and elsewhere, incredible bravery has been shown by a number of Fellows as they try to speak truth to power and make a difference in their countries. Fellows are building hospitals and orphanages, founding companies and non-profits, registering companies offering innovative solutions in areas such as climate change mitigation, and reforming government policy and practice in various ways. 

This crop of Africa’s young leaders – half of whom are women – is beginning to show what can be achieved with a little support, and through networking and collaborating with other young people committed to making a difference. If Africa’s potential is to be realised, it is young people like these who must be the next leaders of economic, political and human development efforts on the continent. There is certainly cause for great hope if all this human potential can be fully harnessed.

By Andrew Hartnack


Rapid Evaluation

By | Evaluation

Rapid evaluations, assessments and reviews are becoming more and more relevant, as the need increases for quick and reliable evidence to be available when it is needed. In this quest, the balance between timeliness, cost, and rigour is essential.

When the word “rapid” is used in the context of evaluation, care should be taken to find out exactly what is required. The range of terminology incorporating the word “rapid” is extensive. Semantics are important, and each of the terms below has its own nuances. We will take a closer look at Rapid Evaluation, Rapid Assessment and Rapid Review.

Rapid evaluation, assessment, and appraisal

Methods of rapid evaluation, assessment and appraisal are mostly qualitative and emanate from ethnography. Various approaches [1] can be used in rapid evaluation.

Rapid Evaluation

What is it? An approach that employs intensive, team-based fieldwork, multiple methods for data collection, iterative processes of data collection and analysis, simultaneous data analysis, and community participation. 

What is it rooted in? It comes from the tradition of cultural anthropology and ethnography.

What can it be used for? It is suitable where limited time or other resources are available, and where issues in the question are not yet clearly articulated. It can quickly generate a holistic understanding of a programme from multiple perspectives, including programme “insiders” and “outsiders”.

What are the primary data collection methods? Data collection methods are mostly qualitative: interviews, direct observations, focus group discussions, mapping. Quantitative techniques, e.g. surveys can also be used. 

How long does it take? 4 to 6 weeks. 

What are the advantages? It is fast and cost-effective, produces accurate data, provides “insider perspectives” on complicated problems, and works well for investigating emerging problems or specific issues.

What are the disadvantages? It is less precise than more structured evaluations, and has limited scope and depth.

Take note: “Rapid” does not mean “rushed”. Although rapid evaluations are quicker to execute than traditional evaluations, extensive up-front preparation is required, coupled with meticulous planning and skilful scheduling to ensure strategic use of methods and synchronisation of data collection and analysis. A rapid evaluation requires a team of trained evaluators, and the team leader must be highly trained in qualitative research methods.

Source: I-TECH Technical Implementation Guide #6, 2008: Rapid Evaluation [2].

Rapid Reviews

What is it? Rapid reviews use components of the systematic review for knowledge synthesis in a simplified process [3], and can be described as “a form of evidence synthesis that may provide more timely information for decision making compared with standard systematic reviews” [4].

What is it rooted in? Rapid review is rooted in Systematic Review methodology. Systematic Reviews use “systematic and explicit methods to identify, select, critically appraise, and extract and analyze data from relevant research” [5].

Source: Virginia Commonwealth University, Research Guides: Rapid Review Protocol. What is a rapid review? [6]

What can it be used for? Rapid reviews can be used for exploring new, emerging or critical research topics. They can also be used to produce updates of previous reviews, or to assess what knowledge already exists about a policy or practice.

How is it done? Methods vary, and depend on factors such as the type of review, the resources available, the quality of the literature, and the experience of the reviewers.

How long does it take? Different sources give different estimates, varying from under five weeks, to between one and twelve weeks, to as long as six months.

What are the advantages? It speeds up the systematic review process by omitting some of its stages. It makes it possible to produce a review at short notice, when doing a full systematic review is not practical.

What are the limitations? The process is less rigorous, and the search is not as comprehensive as in a systematic review. It necessitates “cutting corners”, and researchers must be mindful that this may lead to bias. Findings have to be interpreted cautiously, and may have limitations.

Take note: Content experts and those experienced with systematic reviews have to be included. Rapid reviews are “ill defined” – no universal definition exists [8].

Written by Fia van Rensburg


[1] McNall, M. & Foster-Fishman, P.G. 2007. “Methods of Rapid Evaluation, Assessment and Appraisal.” American Journal of Evaluation, 28(2): 151-168.

[3] Temple University Libraries.

[4] Tricco, A.C., Jesmin, A. & Straus, S.E. n.d. “Systematic reviews vs. rapid reviews: What’s the difference?” CADTH Rapid Review Summit, University of Toronto.

[5] Higgins, J.P.T. & Green, S. 2011. Cochrane Handbook for Systematic Reviews of Interventions, Version 5.1.0.

[6] Other sources suggest different timeframes.

[8] Cochrane. “Rapid Reviews: An Introduction.”




Programme adaptability is positive if handled correctly

By | Evaluation

The research and evaluation team at Development Works Changemakers (DWC) often finds that organisations whose projects and programmes we are evaluating struggle with the fact that they have had to change their projects and have not been able to implement exactly what was originally planned. 

Of course, it is very important to plan projects soundly and to develop realistic theories of change and action from the outset, clearly mapping out all assumptions, inputs, activities, outputs, outcomes and impacts, and the linkages between them. Quite often, organisations working in the social and human development sector initiate projects without planning and theorising in this rigorous way. This can cause them problems later on, not only in proving their impact, but also in achieving their goals.

Project fidelity

Project fidelity refers to the extent to which a programme is implemented as intended by those who designed the intervention. It is important. The chances of maintaining fidelity can be improved by ensuring that programme implementers themselves, first and foremost, clearly understand what they intend to do and how they intend to do it. This is not always the case: many smaller projects develop organically, with more “heart” than “head” at the outset. Yet funders still want to know how their money is being spent, and what the impact of their investment has been.

Nevertheless, it is also true that even with good planning and theorising, effective implementation of an intervention is best understood as a careful balance between fidelity and adaptability. Adaptability refers to the extent to which implementation is adjusted to the context and conditions in which it is operating. A balance between these two facets increases the likelihood that a programme will achieve its intended outcomes.

In fact, some studies indicate that programmes that frequently produce the most effective results are those that encourage rather than deter needed adaptations [1], and that such adaptability enhances project sustainability and long-lasting impact [2].

Thus, although greater adaptability reduces implementation fidelity, this is not always as bad as it may seem, especially if such adaptations can be managed: for example, by ensuring that core programme features are implemented with fidelity while less essential features are adapted to achieve the best ecological fit [3].

Importance of programme adaptability

The evidence on adaptability emphasises that contextual factors are an important consideration for effective programme implementation. According to Durlak and DuPre’s (2008) review of over 500 quantitative studies looking at the impact of implementation on programme outcomes and what factors affect implementation, there is strong empirical evidence to show that various contextual factors influence programme implementation.

These include community-level factors (including politics, sociocultural factors, policy and funding); implementer and service provider characteristics (including capacity, leadership, staffing, support systems); and characteristics of innovation (compatibility of the programme to the given context and the adaptability of the programme to fit provider preferences, community needs, values, cultural norms etc.). 


A case study of programme adaptability

A recent programme design and implementation evaluation conducted by DWC illustrates the above situation well. The programme was implemented in a very complex and fluid township context, among young people. It was initially planned to provide skills training sessions and other activities over the course of a year to fixed groups of recent school-leavers.

The programme made use of a high-tech biometric system to manage the registration, recruitment and other aspects of the intervention. However, the implementers soon found that in almost every aspect of the programme they needed to adapt their approach to the local context and needs of the young people.

The initial beneficiary recruitment approach did not work and had to be fundamentally adapted. It was also found that high numbers of young people were dropping out of the year-long programme, because the once-monthly sessions in a fixed group did not suit their needs. Instead, many would leave the area or find a job and not complete the programme.

The programme thus changed to run weekly over a 12-week period, which was much more successful. The biometric system also needed considerable development over the period of implementation. Despite all these adaptations and challenges, by the end of the programme period (three years), the programme had evolved an approach that worked.

Despite concerns from funders that the original project plans and protocols had not been followed, it was apparent that adaptability was required and that the lessons learned in this programme could inform similar interventions in future. 

Correct Procedures

However, another recent evaluation conducted by DWC has underlined the importance of properly documenting and dating changes and adaptations along the way. This ensures that the correct procedures are followed throughout. In the classic project management approach, this kind of process would be expected. There would be consequences if the process was not followed. However, it is too often the case in social programmes that implementers are not always adequately disciplined in this regard. This makes it very difficult for evaluators trying to assess the project in an open and accountable way.   

DWC has evaluated several programmes where similar dynamics have been apparent. We have tried to assist implementers to ensure sound planning and fidelity, for example through theory of change workshops. We have also helped implementers and funders to understand that, if managed effectively, adaptability can be a key contributor to programme success and sustainability.

Written by Andrew Hartnack

[1] Forehand, R., Dorsey, S., Jones, D.J., Long, N. & McMahon, R.J. (2010). “Adherence and flexibility: They can (and do) coexist!” Clinical Psychology: Science and Practice, 17, 258-264.

[2] Ghate, D. (2016). “From programs to systems: Deploying implementation science and practice for sustained real world effectiveness in services for children and families.” Journal of Clinical Child & Adolescent Psychology, 45(6), 812-826.

[3] Durlak, J.A. & DuPre, E.P. (2008). “Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation.” American Journal of Community Psychology, 41, 327-350; Ghate (2016).


Vuka Evaluators, Vuka! It’s Women’s Month!

By | Evaluation, Gender

Women’s Day and Women’s Month remind us of women’s struggles and strengths. We get excited, celebrate the day, join in Women’s Month activities, and then the energy dissipates and everything goes back to normal. Except it is not normal, because gender equality remains an elusive dream.

This is not because of government’s lack of real political will, not because of patriarchy, not because of a lack of funding, not because of whatever excuse we choose. It is because we, as evaluators, do not do enough. Yes: in South Africa, we do not do enough to make sure gender is integrated into evaluation.

Gender and evaluation in Women’s Month

Guess what popped up in a Google search on “gender-sensitive evaluation”?

OECD, UN Women, FAO, Better Evaluation, GIZ… nothing about South Africa. Another try – this time with the mainstreaming buzzword: “gender mainstreaming in evaluation”. The same thing happens: UNDP, EIGE, FAO, UN Evaluation, OSAGI, OECD.

Can’t be true. Let’s try “gender in evaluation South Africa”. Genderlinks, Sonke Gender Justice, HEARD, National Gender Policy Framework Environmental Affairs, and then, eventually “Draft Gender Responsive Planning Framework, DPME 2018” and lo and behold “A gendered review of South Africa’s Implementation of the Millennium Development Goals”. Wonderful. The only problem here is that there is no prominent thought leadership on gendered evaluations.

Granted, a random google search may not be representative of what is being done, but it surely shows that gender in evaluation is not a very prominent theme in South Africa. Some initiatives have been taken, and some of us have integrated gender in our work. But we have to admit that largely, as evaluators, we have not been holding the gender equality torch very high, and we have been running rather slowly in this race.

The role of evaluators

We have to accept that we, as evaluators, are part of the problem. Some of us know that gender-responsive evaluation can make a huge difference in integrating gender in the development agenda. Some of us don’t realise how important it is. Either way, there is no concerted effort to ensure that gender considerations are integrated into evaluations.

Yes, very few of our clients request gender analysis. And yes, our clients do not systematically collect gender-disaggregated data, or consider issues of gender in their planning.  But we cannot blame project planners and funders for not including gender in projects. We also have a role to play.

It is us, the evaluators, who do too little to promote the gender agenda. It is us who do not insist on applying a gender lens in our evaluations. There are excellent entry points: we are familiar with theory-based evaluation, and a Theory of Change or a Clarificatory/Design Evaluation is an excellent opportunity to integrate issues of gender in our work and in the client system.

As evaluators, we can lead the way, initiate the discussion, stimulate thought… Does this programme work the same for men and for women? What are the assumptions related to men and women, respectively? Are the change mechanisms the same for men and for women?

Evidence-based policy-making

Another entry point is the commitment to evidence-based policy-making. It provides an opportunity for the development of gender-sensitive indicators – quantitative indicators based on sex-disaggregated statistical data, and qualitative indicators:

  • Increases in women’s levels of empowerment;
  • Changes in attitudes about gender equality;
  • Changes in the relations between men and women;
  • Diverse outcomes of a particular policy, programme or activity for women and men;
  • How the status of men and women has changed in terms of poverty, education, etc.

There is no need to push it down the client’s throat – just do it, routinely, systematically, firmly, and most importantly, persistently. By doing this, opportunities will be created for clients to reflect on issues of gender, and its significance for their work, for policy-making and ultimately changing our country into a better place for both men and women.

Changing the meaning of Women’s Month

There are two simple things we can do to make a difference: Firstly, make sure that gender disaggregated data is collected and analysed. We need to “let the data speak” about the realities of boys and girls, men and women.
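Routinely disaggregating data does not require special tooling. A minimal sketch of the idea, using invented survey records and illustrative field names:

```python
from collections import defaultdict

# Hypothetical survey records; all values and field names are invented.
records = [
    {"gender": "female", "completed_training": True,  "income_change": 350},
    {"gender": "female", "completed_training": False, "income_change": 0},
    {"gender": "male",   "completed_training": True,  "income_change": 500},
    {"gender": "male",   "completed_training": True,  "income_change": 420},
    {"gender": "female", "completed_training": True,  "income_change": 610},
]

def disaggregate(records, by="gender"):
    """Group records by a demographic field and summarise key outcomes per group."""
    groups = defaultdict(list)
    for r in records:
        groups[r[by]].append(r)
    summary = {}
    for key, rows in groups.items():
        summary[key] = {
            "n": len(rows),
            "completion_rate": sum(r["completed_training"] for r in rows) / len(rows),
            "mean_income_change": sum(r["income_change"] for r in rows) / len(rows),
        }
    return summary

print(disaggregate(records))
```

The same grouping can be run on any demographic field, which makes it straightforward to report every outcome indicator separately for men and women as a matter of routine rather than as an afterthought.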

Secondly, we need to realise that it is not only about women – it is about gender. Both genders. It is about the different needs and experiences of girls and boys, men and women. And we also need to know that “both genders” is still a gross simplification, because the gender identity spectrum is much more intricate and nuanced than that. But considering “both genders” is a starting point.

With all of this in mind, the message this Women’s Month is: Vuka evaluators, vuka! It’s Women’s Month. Let’s make a conscious decision to take the lead in making gender equality a reality. Let’s run a little faster in this race, and hold the torch of gender equality high.

Written by Fia van Rensburg


The Question of Value for Money: Using SROI in Evaluation

By | Evaluation

Evaluation is the most effective way to answer the question, “Is my programme really working?” By using various methods to understand and systematically analyse how programmes are designed and implemented, and how they work, evaluations unpack their tangible and intangible effects, including how valuable the intervention is, and recommend ways to improve effectiveness. That’s where SROI in evaluation comes in.

“But how do I know I’m getting value for my investment?” is an often-asked question.




Achieving desired outcomes

Sometimes achieving desired outcomes doesn’t provide sufficient motivation for the continuation of a programme. Corporate social investment, for example, may be expected to provide greater returns than the initial programme investment.

Although evaluation is effective in measuring outcomes, it doesn’t provide an actual monetary value of a programme. If that’s what you’re looking for, then an evaluation approach such as the Social Return on Investment (SROI) method could be most effective in illustrating value.

SROI in evaluation

SROI is an approach used to measure social impact by providing a value for the economic, environmental and social effects of a programme. An SROI ratio gives an approximate measurement, in Rand value, of the social value created for every R1 invested.
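The ratio itself is simple arithmetic once outcomes have been valued; the hard work of SROI lies in choosing credible financial proxies for each outcome. A minimal sketch, with all figures and proxy names invented for illustration (a full SROI analysis would also discount future values and adjust for deadweight and attribution):

```python
def sroi_ratio(social_value_by_outcome, total_investment):
    """Return the rand value of social outcomes created per R1 invested."""
    total_value = sum(social_value_by_outcome.values())
    return total_value / total_investment

# Hypothetical valuations of a programme's outcomes, in rand.
outcomes = {
    "increased_earnings": 450_000,     # proxy: wage gains of participants
    "reduced_clinic_visits": 120_000,  # proxy: avoided healthcare costs
    "volunteer_hours_mobilised": 30_000,
}

ratio = sroi_ratio(outcomes, total_investment=200_000)
print(f"R{ratio:.2f} of social value per R1 invested")  # 600,000 / 200,000 = R3.00
```

Presented alongside the evaluation’s qualitative findings, a ratio like this gives decision-makers a single, comparable figure without replacing the richer evidence behind it.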

Despite evaluation and SROI being distinct ways to understand and measure social impact, both are evidence-based approaches founded on the principles of transparency, participation, and verification.

And although the information produced by each approach is different, when combined, this information may be more useful and compelling for decision-makers. In this sense, SROI can be viewed as an additional step in evaluation.

Social development

People matter, but social development money matters too, and especially in more prudent times, there is perhaps no stronger way to make a business case. Evaluators, as jacks-of-all-trades, should not shy away from equipping decision-makers with as much information as possible to improve interventions. For some evaluations, SROI might be exactly what you need.

Finally, it’s the job of the evaluator to support decision-making in a way that optimises and maximises value, highlighting that value with every method available.


Understanding the Theory of Change (ToC) and Theory of Action (ToA)

By | Evaluation

A key to program evaluation is the application of a logic model to assess degrees of social impact and/or change. The idea is to apply theory to actualize and implement real change at the ground level. The program theory, which maps out what a given program currently does, contains two parts: the theory of change (ToC) and the theory of action (ToA).

Theory of Change vs. Theory of Action

The ToC focuses on the dynamics of change and the drivers through which change comes about, irrespective of any planned intervention. Simply put, ToC answers why you expect change to happen and, through a series of ‘If, then’ statements, how and why change occurs.

The ToA displays how interventions are constructed to activate the ToC. This operationalization of the ToC illustrates what the program does, how this triggers the change process and identifies critical assumptions.

The final analysis of the ToA will either show support for the current program functionality or identify key gaps that hinder it. This specific action plan mobilizes the ‘If, then’ statements.
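The ‘If, then’ logic can be made concrete by writing the chain down as ordered cause-and-effect links. A small illustrative sketch, for a hypothetical youth skills program (all statements invented):

```python
# A hypothetical 'If, then' chain for a youth skills program.
toc_chain = [
    ("young people attend weekly skills workshops",
     "they gain market-relevant skills"),
    ("they gain market-relevant skills",
     "they apply for and secure entry-level jobs"),
    ("they secure entry-level jobs",
     "household income and self-reliance increase"),
]

def narrate(chain):
    """Render each (cause, effect) link as a readable 'If ..., then ...' statement."""
    return [f"If {cause}, then {effect}." for cause, effect in chain]

for line in narrate(toc_chain):
    print(line)
```

Writing the chain out this way also makes the assumptions visible: each link is a claim that can be tested, and a broken link points to where the program theory needs revision.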

The Logic Model

This two-fold program theory is then applied to the standard framework. The logic model is made up of five parts.

1. Inputs

The logic model first begins with the identification of inputs—resources that must be compiled to execute the activities that are to follow, in an ultimate effort to get the program kick-started. Some examples of inputs might be partnerships or funding that will be used for stipends.

2. Activities

Secondly, activities are actions that lead to the desired change, specifically the process that a beneficiary goes through. This might look like workshops, training, and/or services.

3. Outputs

Outputs are immediate, tangible products of the activities. They are usually listed as a series of ideal products that are a reflection of how the program gets rolled out. An example could be the number of learners enrolled in an after-school program.

4. Outcomes

Outcomes, which are easily confused with outputs, are the desired, intangible results: what you want your beneficiaries to go out into the world and actualize, such as improvements in behaviours or beliefs. These outcomes are usually presented in three tiers: short term, medium term, and long term.

Unintended outcomes are also important to identify at this stage, as they are often a reflection of blind spots that weren’t accounted for in the key stages of creating the program’s blueprint.

5. Impact

Lastly, impact is a macro-level reflection of the ultimate goal that the program seeks to achieve at the ground level. Such impacts are an attempt to effect systemic change, like alleviating poverty or reducing unemployment.
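The five parts above can be captured in a simple structure. A minimal sketch, populated with invented examples (building on the after-school program mentioned earlier):

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    """The five-part logic model described above; all contents are illustrative."""
    inputs: list      # resources compiled to kick-start the program
    activities: list  # actions the beneficiary goes through
    outputs: list     # immediate, tangible products of the activities
    outcomes: dict    # desired, intangible results, in three tiers
    impact: str       # macro-level systemic change sought

after_school = LogicModel(
    inputs=["funding for stipends", "partnership with local schools"],
    activities=["weekly tutoring workshops", "mentor training"],
    outputs=["number of learners enrolled", "number of sessions delivered"],
    outcomes={
        "short": "improved study habits",
        "medium": "higher pass rates",
        "long": "increased tertiary enrolment",
    },
    impact="reduced youth unemployment in the community",
)

print(after_school.impact)
```

Laying the model out like this makes the linkages explicit: each tier should plausibly follow from the one before it, and gaps in that chain are exactly what an evaluation probes.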

Where possible, Development Works Changemakers will produce a ToC and ToA as part of the evaluation process, when undertaking a program evaluation. This process usually involves a series of interactive workshops with key programme stakeholders at the onset of an evaluation.

By Malanna Wheat