The Harmony Institute and Bay Area Video Coalition's Impact Playbook is oft-cited in the impact space as a key guide for media makers developing impact and evaluation plans for their work. The guide emphasises quantitative over qualitative indicators of impact, whilst acknowledging the need for a flexible approach.
Squarely aimed at creators, the guide traverses the video production workflow from idea generation to outreach and distribution, with its strongest focus on the post-production, pre-release phase onwards. Drawing on marketing strategies, it places a strong emphasis on rigorous data collection before, during and after distribution, for the purposes of comparison and evaluation.
Critically, the Playbook encourages creators to set goals at the outset of their project. This is a welcome inclusion. In EngageMedia's distribution, outreach and engagement workshops, we frequently found that film-makers do not set goals in advance, which reduces a film's effectiveness, makes outreach planning difficult, and means that impact measurement is even less likely to occur, especially given already limited budgets.
The encouragement to set hard goals at the outset contrasts with Lennie and Tacchi's Communication for Development approach, which eschews pre-defined goals in favour of more fluid approaches throughout the process. No doubt they would argue against the statement that "only by clearly defining desired outcomes (at the project's outset) is it possible to plan and carry out impact measurement." We'll explore the Communication for Development approach further as this research progresses.
Whilst the Impact Playbook is reasonably simplified, it assumes a baseline level of resources more appropriately expected of professional or semi-professional film-makers backed by budgets; the Playbook is not intended as a practical guide for citizen media makers with limited resources. The ability to carry out preliminary research, conduct interviews, set up metrics systems and track behaviour or policy change is highly valuable, but doing so requires significant inputs that even small to medium-sized non-profits often do not have.
What would be useful is an adaptive toolkit offering small, medium and large-scale options, tailored to the resources each film-maker or group has available and to the scale or nature of their video project or initiative.
The overall scope of impact in the Playbook is also limited, for the most part, to the post-production, distribution and engagement phases of a project. Whilst these are, of course, critical, there are also multiple points of impact throughout the pre-production and production phases, whether through engagement with interviewees and campaign organisations, or with participants involved in the production process. Articulating these processes and including measurement tools for understanding impacts across the entire workflow would benefit any future system.
From a Video for Change perspective, understanding who became actively involved and how (e.g. joining or starting a discussion or group, and contributing to the movement) is critical.
Additionally, as Tanya Notley notes in the Video For Change Impact Evaluation Scoping Study, the Playbook “does not answer important questions about unintended consequences or consider baseline goals that perhaps every project should consider (or be encouraged to consider) in the design and evaluation process. (For example, using a ‘do no harm approach’ or ‘applying core ethical principles’ or ‘being accountable to the communities you want to support’.)”
Despite these limitations, the Playbook is a strong explanatory document and an introductory guide to help film-makers consider how they might think about and measure impact. A gap remains (and it is not one the Playbook necessarily intended to fill) for a resource that can purposefully guide video-makers through the process of developing their own strategy in more detail, tailored to the level of resources they have available. What would be helpful is a series of dynamic templates that provide building blocks for users to construct their own impact evaluation framework.
A more fundamental solution to these problems, given that resources continue to limit the measurement of impact, would be to convince funders to accept, and then encourage, the routine inclusion of at least 5% of any project budget for impact evaluation. As the impact space grows, perhaps this is an improvement within real reach?