Assessing the impact of voluntary activities on the social well-being of communities: the matter has become topical once again following the publication, in the Official Journal, of the new guidelines for non-profit and voluntary sector institutions. The text calls on non-profit organisations to combine quality and quantity when reporting on their activities, following the methodology best suited to their operating environment. The latest issue of Vdossier, a magazine published by ten Italian volunteer centres, gathers the reflections and experiences of social impact assessment experts.
International cooperation: flexibility is the way forward
Laura Moretti’s interview with Bruno Baroni, AVSI Head of Monitoring and Evaluation (M&E) in Congo and South Sudan.
When can international cooperation serve as an example for small voluntary organisations with regard to assessing social impact? We asked Bruno Baroni, AVSI Head of Monitoring and Evaluation (M&E) in Congo and South Sudan, to describe the issues and questions they face in the field when assessing the social value of a project. I believe a premise is necessary: the topic of monitoring and evaluation (M&E) can seem very technical, a matter to be left to discussions among experts on the subject. The opposite is true: the ultimate aim of M&E is nothing other than to inspire a reflection that should guide action, so it makes sense only if conducted together with those involved in designing and implementing the project. Indeed, striving to identify and evaluate the changes produced by our projects is an activity that in itself already changes the way we think and act, making us “result-oriented”, as business management experts would put it. It goes without saying that the full potential of M&E disappears when it is relegated to discussions among experts or shared only outside our organisations, for instance with donors.
Is the assessment of social impact a way of learning?
Besides finding evidence of change, M&E should help us better understand the context in which our projects take place, so that we can explain “how” the change happens, which is the most important thing for practitioners like us to know, since we have the responsibility and privilege of “making things happen”. If we do this well, not only do we create knowledge to write better projects in the future, but, already during the reflection, we also find better ways to conduct our ongoing interventions.
At AVSI we pay a lot of attention to collecting data and then validating them in the communities. By doing so, through a fact-based discussion, we discover better ways to conduct our projects, and our understanding grows along with that of the community, which feels part of a genuine partnership. In longer projects, we try to shape reflection communities: instead of using predefined questionnaires, we develop tools to gather information based on the considerations that come from our personnel in the field and from the beneficiaries, whom we ask to share their reflections on the first signs of change, but also their ambitions and even their worries. It is a participatory and empirical M&E, done first and foremost together with, and for the benefit of, the people who work within AVSI and who interact most frequently with beneficiaries. It is a way of using M&E as an opportunity to improve the way we work, going far beyond mere compliance with donors’ reporting requirements. Put simply, thinking about what you do helps you do it better; thinking about why you do it helps you stay motivated and think about the future, not only the present; and doing all this together really makes us a team. We have witnessed it at AVSI: people are more aware of their roles, feel more motivated, and commit themselves to the challenge. Beware of treating this sort of M&E as a luxury, an extra, something beneficial only to internal staff or useful only for being accountable to the community. Doing it well is also key to getting your next project funded: knowing how to describe your way of working in a specific context – what is defined as “your specific added value” – is what donors pay increasing attention to when allocating funds.
Does assessing social impact mean looking not only at beneficiaries but at the entire community?
An important and often overlooked difference is the one between evaluating – and designing – projects meant to help people taken individually, or together as members of a community. In the latter case, I would pay particular attention to the ways people help each other, and I would probably support those mechanisms rather than introducing alien models, which often prove ephemeral, if not detrimental. From a technical standpoint, it is a question of monitoring entire communities with household surveys, rather than just tracking specific families or individuals with targeted assessments. From a conceptual point of view there is more, because I will focus on the capacity of community members to help each other, on their strengths, and not only on their vulnerabilities taken individually. The difference is huge. For instance, when I think in terms of individuals, I will assess the impact of a training project by considering how the situation of its beneficiaries has changed, to the point that I will keep following them in my monitoring even if they leave their communities, concluding with delight that they have finally found new opportunities elsewhere. If instead I choose indicators based on the conditions of whole communities, starting with the one where those beneficiaries lived, I may conclude that the project was a failure, since the best and brightest have left behind an even more impoverished community. The deadly sin of M&E is that, used wrongly, it has made us too focused on providing a service to individuals rather than on promoting deep changes in entire communities.
How is the phenomenon of social impact assessment developing, and with what new features?
Impact evaluation is a technique which, in most cases, reduces the complexity of a project to a number – for example, the ability to find a job after a training course. Large organisations which allocate substantial resources to finance similar multi-year projects all over the world need to make comparisons, and therefore need to reduce a project to a single number. Small organisations have different capacities and needs, and work on a variety of projects, some of which may not fit strictly quantitative methods of inquiry. In those cases, reducing the entire project to a number may be of little help, besides demanding technical expertise that is neither easy nor cheap to access. A simple project evaluation, structured around the five standard qualitative criteria, may turn out to be more useful. In other cases, the best option is a series of rapid evaluations conducted internally, as suggested by specialists in “developmental evaluation”. If we cannot do without an impact assessment, the recommendation is to base it on a large number of indicators – not just one – so that we have the possibility to “find out” in practice which is the most relevant.
Indeed, successful projects often reveal that things do not go as expected. Farmers supported through our projects often share the extra food they grow with their extended families. If we monitor only their income or their food consumption, we may not find evidence of any success. But the problem is neither the project nor the farmer: it is the M&E personnel who assume they know what will happen and are too eager to measure it, instead of indulging in the art of discovering change and learning from it. I began to appreciate this difference some years ago, when I met some women, all coming from stories of social marginalisation, who had founded a cooperative in the suburbs of Guatemala City; I was told it was a success story. The women told me that they earned exactly the same as in the firm where they used to work before, so I could not understand the difference, until I noticed that they produced small textile puppets and one woman said that she had learned to make 150 stuffed animals of different kinds and shapes, and that she was continuing to learn. She told me that she had learned a trade which she could carry on if the cooperative closed and, if necessary, teach to her children. Put in the words of an economist, on top of her salary she had insurance against unemployment and capital to leave to her children, not to mention the enviable self-esteem and determination visible in everything she said. None of this was part of the survey I had in mind, but I learned my lesson and managed to include it in my report. Now I say: think about that section of your report in advance; it may very well turn out to be the most important – and most rewarding – part of the document.
We suppose that reality is linear and simple, but change, understood as social impact, is complex by nature, takes unexpected forms, is “emergent”. Tools such as impact assessment, by contrast, assume extensive prior knowledge and, for this reason, are useful mostly in particular circumstances. They are particularly effective, for instance, for testing the yield of more drought-resistant seeds. In other cases, especially those regarding people’s inner growth, we must be very careful not to conclude that what is hard to evaluate mathematically is less important, or not important at all. When we decide what to evaluate, we are deciding what is important. It is far better to have an approximate idea of what matters than a precise estimate of what is of little relevance.
Can we describe social change rather than evaluate it?
It should be recalled that there are also other tools, which have now been tried and tested for about twenty years. The list would be long, but we can take the example of the Most Significant Change (MSC) technique, based on collecting success stories and on sharing the reasons why these stories are considered successful – this second part is often omitted, but it is crucial to spark the discussion and reflection that are the cornerstone of MSC. There is a huge difference between “measuring” and “narrating”, and the gap between them matters because narrating change can capture nuances that often prove crucial. Supporting conventional M&E techniques with more flexible and innovative methods allows each organisation to find the mix that best matches its direction, technical capacity, and sensitivity. And so we come back to the initial premise: it is important to learn how to speak inclusively about M&E, involving all the souls of an organisation. If M&E does not foster more harmonious work across the entire organisation, then we are not using it effectively and we are not involving enough people. My advice is to test different methods with pilot projects – not abandoning other techniques (each one has its own virtues and limitations), but rather trying to find the most suitable mix for your organisation as a whole.