6 MONITORING AND EVALUATION
- Project managers generally had a good understanding of the purpose of monitoring and evaluation, but practice was variable.
- Evaluation was often not planned into work programmes (which is a common weakness in community sector programmes, not only in the CCF).
- Perhaps understandably, and especially where projects have few resources, project teams' enthusiasm for getting people engaged may be prioritised ahead of evaluation.
- Some projects had attempted highly over-specified evaluations which they regretted; other projects appeared to have done little evaluation, which some are now trying to address. Most projects had done some monitoring and evaluation.
- The most effective evaluation approaches balanced robustness and simplicity, including:
- Simple recording systems for the number of energy efficiency measures adopted as a result of the project (which could then be used as the basis for indicative estimates of CO2 reduction);
- Read-outs from energy monitors given out to participants;
- Area-based energy consumption data (though it was difficult to obtain);
- Participant surveys - with projects emphasising a need for short questionnaires; and
- Recruiting a volunteer sub-sample who are willing to take part in more intensive monitoring (accepting that this is unlikely to be a representative sample).
- Projects frequently commented on the risk that evaluation deters participant engagement and eats up project resources, and that evaluation demands on participants need to be minimal.
- In this respect, some projects had experienced problems with carbon footprint exercises (which require longer surveys to provide enough data) and participant diaries (which require voluntary and sustained interest from participants).
- A few projects had used external evaluators, with mixed results. A key lesson was that projects need to work closely with the evaluator to ensure the results will be useful to the project and not over-specified.
- The CCF Low Carbon Route Maps were a clear success (although a few projects seemed not to know about them). They provided projects with a simple approach that enabled them to produce 'good enough' data for their own, and the funders', purposes.
- Evaluation support offered through the CCF team was also valued by those who used it.
- Our own observation is that there is still a gap in robust methods for measuring the carbon impact of behavioural change (as opposed to estimating from installed measures).
6.2.1 The CCF adopted a relatively light-touch approach to monitoring and evaluation, asking for short quarterly reports and an end-of-funding evaluation report. Evaluation support was made available to projects - notably the CCF Low Carbon Route Maps (which set out a rough-and-ready approach to calculating carbon emission reductions achieved) and access to training and one-to-one evaluation advice.
6.2.2 Features of successful monitoring and evaluation approaches were:
- Allocation of time and resources for evaluation in project plans;
- Evaluation approaches that required minimal participant input;
- Balance between robust data and ease of data collection - the Low Carbon Route Maps struck a good balance; and
- Preparation of a clear evaluation plan at the start - evaluation support was extremely useful to projects in this respect.
Resourcing for monitoring and evaluation
6.3.1 The managers of the 21 projects taking part in the review demonstrated a good understanding of the purposes of evaluation, including the need to demonstrate impacts and account to the funder, and a desire to use evaluation as a learning tool to help improve the effectiveness of their delivery models. However, the quality of evaluation activities in practice was variable.
6.3.2 It was relatively rare for project managers to have specifically allocated time and funds for monitoring and evaluation. As already noted (section 5.3.17), community projects are naturally inclined to work at full capacity, limiting the time and attention available for areas that are not immediately crucial to project delivery - such as evaluation. This may be further exacerbated by project staff finding delivery more interesting than evaluative activities.
6.3.3 If monitoring and evaluation activities are not properly planned into the project, there is a risk that only ad hoc data is gathered, resulting in a poor quality evaluation. Alternatively, attempting to gather robust data without sufficient resources can detract from project delivery.
Approaches to monitoring and evaluation
6.3.4 The projects' approaches to monitoring and evaluation were very varied. At one extreme, some had developed detailed plans for evaluating virtually every aspect of the project and were using specialist analytical techniques, while at the other extreme some had "not given it much thought yet". Most fell into the middle ground, having developed a plan for monitoring and evaluation and carrying out some data collection. Some had struck a better balance than others between robustness of data, ease of data collection from the project's perspective, and unobtrusiveness from the participants' perspective.
6.3.5 Successful evaluation approaches on energy projects included:
- Counting the number of installations of different types of insulation, other hard measures and renewable energy, and using standard conversion factors to calculate carbon savings (e.g. East Neuk and Landward Energy Network); and
- Downloading data from energy monitors - although seasonality of energy use is a confounding factor, short-term data can demonstrate reductions in energy use (e.g. Carbon Reduction Shetland).
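The counting-and-conversion approach in the first bullet amounts to a simple calculation, which can be sketched as below. The per-measure conversion factors and measure names here are illustrative assumptions for the sketch, not the official CCF Low Carbon Route Map values:

```python
# Indicative annual CO2 savings from counts of installed measures.
# The kgCO2-per-year factors below are illustrative placeholders,
# NOT the official CCF Low Carbon Route Map values.
SAVINGS_KG_CO2_PER_YEAR = {
    "loft_insulation": 500,          # assumed kgCO2/year per installation
    "cavity_wall_insulation": 550,   # assumed
    "draught_proofing": 120,         # assumed
}

def annual_co2_saving(installations: dict) -> float:
    """Sum indicative annual CO2 savings (kg) over recorded installations."""
    return sum(
        count * SAVINGS_KG_CO2_PER_YEAR[measure]
        for measure, count in installations.items()
    )

# Hypothetical installation counts recorded by a project:
recorded = {"loft_insulation": 12, "cavity_wall_insulation": 8, "draught_proofing": 30}
print(annual_co2_saving(recorded))  # 12*500 + 8*550 + 30*120 = 14000 kg
```

The point of the sketch is that a project only needs to keep simple counts of installations; the conversion to carbon savings is a one-line multiplication once standard factors are chosen.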
6.3.6 A less successful approach was to ask participants to keep a record of their energy behaviours, for example through diaries - there was little evidence of participants remembering to do this.
6.3.7 Using actual energy consumption data can provide an accurate picture of project impacts, but can be very difficult to come by. One project that had successfully used this approach was Sustainable Solutions for Linlithgow, which had managed to obtain year-on-year energy consumption data for the whole town. However, there was a significant time lag in this becoming available. Other projects that had tried to gain access to such data at the household level had had little success in doing so.
6.3.8 Successful evaluation approaches on transport projects included:
- Recording miles cycled during a cycle challenge (A Better Way to Work); and
- Short behaviour surveys (Active Leith) - long questionnaires are more likely to suffer from low completion rates.
6.3.9 Despite placing the onus on participants to record data, the cycle challenge yielded reasonably robust data. Reasons for this are likely to include the incentive of a prize for the challenge winner, as well as the fact that cycling is a behaviour that cyclists tend to be proud of - so they may be more willing to put some effort into keeping a record of their behaviour (in comparison to, for example, the average householder when it comes to everyday energy behaviours).
6.3.10 In addition, travel behaviour monitoring needs to be carried out at intervals during the year in order to identify impacts of seasonality.
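Mileage records from a cycle challenge can be turned into an indicative carbon saving by assuming the cycled miles displace car trips. The emission factor and mileage figures below are illustrative assumptions, not values reported by the projects:

```python
# Indicative CO2 saving from a cycle challenge, assuming each mile
# cycled displaces a mile that would otherwise have been driven.
CAR_KG_CO2_PER_MILE = 0.27  # illustrative factor, not an official value

def challenge_saving_kg(miles_per_participant: list) -> float:
    """Total indicative CO2 saving (kg) across all recorded participants."""
    return sum(miles_per_participant) * CAR_KG_CO2_PER_MILE

miles = [42.0, 110.5, 63.5]  # hypothetical logged miles from the challenge
print(round(challenge_saving_kg(miles), 1))  # (42 + 110.5 + 63.5) * 0.27 -> 58.3
```

As with the installation counts above, the data-collection burden stays with a single simple measure (miles logged); the headline figure comes from applying one assumed factor.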
6.3.11 Successful evaluation approaches on food projects included:
- Recording participant numbers, plot numbers and plot sizes on growing projects (compatible with the Low Carbon Route Maps); and
- Recruiting a group of research volunteers from among participants to complete detailed food purchasing behaviour surveys (Fife Diet).
6.3.12 A potential issue with the research volunteer approach is that volunteers are likely to be the keenest participants and may make the largest changes, creating bias in the data; for the same reason, however, they are also likely to be the most willing to complete surveys. Provided the limitations of the data obtained are acknowledged, as the Fife Diet have done, this seems a reasonably balanced evaluation approach.
6.3.13 The use of diaries on food growing projects appeared to be less successful, as none of the interviewees reported actively using them. Similarly to cycling, food growing is a behaviour that participants tend to be proud of, and there may be scope for gathering more detailed data through diaries - perhaps recruiting the keenest growers (similarly to the Fife Diet's research volunteer approach) to keep a diary could help to increase their use.
6.3.14 Successful evaluation approaches of waste activities included:
- Recording food waste volumes (e.g. from kitchen caddies) - notably done by Carbon Busters; and
- Counting numbers of plastic bags passing through reuse points (done by Carbon Reduction Shetland during the pilot scheme).
6.3.15 Some of the projects running a range of activities were carrying out carbon footprint (or similar) surveys which aimed to give an overview of participants' environmental impacts in a range of areas. These kinds of surveys tend to be relatively long, and participants who have an interest in the environment are more likely than others to be willing to take part. In the case of Toryglen Transitions, however, where the project had built up a rapport with the local community (see also section 5.7 on building up a high profile), the survey seemed to successfully reach beyond the 'already interested'.
6.3.16 A final point that applies to all projects regardless of their subject area is that, when requesting input from participants for monitoring and evaluation activities, it is important to explain why they are being asked for this information. In a small number of cases, this had been unclear to participants, who had become confused at best and suspicious at worst.
Balancing evaluation needs
6.3.17 It can be difficult to strike a balance between too much and too little evaluation, and neither extreme results in positive outcomes - an overly intensive evaluation is at best a poor use of a project's resources and at worst off-putting to participants, while too lax an evaluation fails to provide any robust evidence of a project's impact.
6.3.18 Many of the project managers we spoke to recognised the trade-offs, but only some - key examples including the Edinburgh Garden Share Scheme, East Neuk and Landward Energy Network and Sustainable Solutions for Linlithgow - had found a happy medium that they were comfortable with. Others at the further extremes (too little or too much evaluation) seemed to be at risk of veering too far towards the other extreme in future as a result of their initial experiences.
6.3.19 One possible approach is for projects to work with external evaluators. While this may be a more costly approach than in-house evaluation, in theory it could produce better quality data by virtue of being professionally done (as well as freeing up time for project delivery). In practice, however, some of the projects that had used external evaluators were disappointed with the outcomes - one, for example, felt that the evaluator's long questionnaires had put people off taking part in the project, and another found the evaluator's data was not detailed enough for the project's purposes. Projects need to work closely with their evaluators to ensure that the evaluation meets their needs - as, for example, the Fife Diet did with their carbon consultant.
6.3.20 The Low Carbon Route Maps were well received by most of those projects that were aware of them, although one project commented that the food Route Map was too simplistic. In general, though, the Route Maps would appear to help strike a balance between obtaining robust data and not detracting from project delivery. They also appealed to those who struggled with evaluation and seemed to want to be told how to do it. Not all project managers were aware of the Route Maps, however, and increasing awareness of these could be beneficial.
6.3.21 All project managers that had received evaluation support via the CCF, either in the form of attending workshops or receiving personalised one-to-one assistance, were appreciative of this support. Evaluation support seems most beneficial in the early stages of projects, when it can feed into evaluation plans. One-to-one support is particularly helpful in that it allows projects to resolve questions that are highly specific to their activities and delivery approach.