2 Overview of Evaluation
Evaluation Aims & Objectives
2.1 The main aim of the evaluation was to assess the development and impact of interventions implemented within the 3 local authority test sites to improve uptake of SDS. The evaluation brief from Scottish Government was to:
- Describe current SDS policy, activity, and practice in the test sites to generate baseline measures/data
- Develop tools and frameworks with project managers at each site to evaluate progress
- Examine the extent to which each site addressed the 3 key areas (bridging finance; cutting red tape; leadership and training)
- Assess the impact of the interventions at each test site in progressing SDS
- Identify the implications for policy and practice within wider Scottish context
- Disseminate findings to relevant stakeholders.
2.2 The overarching framework for planning the evaluation was adapted from Scriven's (2003) Key Evaluation Checklist (cited in Davidson, 2005), which emphasises the importance of process as well as outcome evaluation, and the triangulation of data types and sources to support robust conclusions. The evaluation design was mixed-method, gathering mainly qualitative information from a range of stakeholders (service users, carers, professionals) in each test site area through interviews, focus groups, learning sets, and reports.
2.3 Adopting the 3 interventions/themes as the evaluation's overarching framework presented some challenges. One test site explicitly drafted an action plan around personalisation activities rather than the Scottish Government themes. Furthermore, the commissioned study was not an evaluation of IBs or DPs specifically, nor of SDS compared with traditional services. The evaluation design therefore attempted to capture information about the processes and changes implemented by each test site according to its local operational definition of SDS. The evaluation team worked with local project managers to tailor data collection to local circumstances, and different stakeholders were involved, through learning sets and a final stakeholder event, in making judgements about the effectiveness of the 3 specific interventions/themes under study. While this design allowed some flexibility across sites, the same types of data were collected from all 3 sites.
2.4 The evaluation had 3 main stages: Stage 1) establishing the baseline; Stage 2) evaluating process and impact; and Stage 3) reflecting on findings for wider policy and practice.
2.5 An initial plan to collect cost information on the test sites had to be scaled down considerably when it became apparent that none of the participating local authorities was gathering the data necessary to conduct a cost analysis, and that there was no standardisation of financial reporting to Scottish Government. Within the parameters of this evaluation, therefore, it was not possible to produce a cost analysis. However, interviews were conducted with finance officers in the 3 sites at Stages 1 and 2 to discuss their perspectives, especially on how CIPFA (2009) guidelines on introducing 'light touch' monitoring were being implemented by the test sites.
Figure 2.1: Stages of the evaluation
Stage 1 - Baseline
2.6 Various methods were used to gather data providing a 'baseline', or picture of circumstances prior to implementation of the test sites:
- A literature review of definitions of SDS, and of the barriers and facilitators to SDS. A separate report is available (Manthorpe et al., 2011);
- Secondary data analysis of national DP and community care statistics;
- Interviews with local stakeholders in each of the test sites and gathering of local information to provide detail on SDS policy, practice and activity;
- Interviews with national stakeholders in Scottish Government, local government bodies, specialist SDS/DP bodies, and professional or special interest organisations to assess national policy and the rationale for the test sites;
- Learning sets involving various local stakeholders, including project boards, providers, professionals, service users and carers. Although planned, these did not take place in Dumfries & Galloway due to delays in setting up the personalisation board.
Stage 2 - Process and Impact Assessment
2.7 The second stage of data collection gathered information about the processes and impact of implementing SDS within each site. Accepting the broad definition of SDS, Stage 2 focused on capturing information about the extent to which the test sites were delivering SDS options across the whole spectrum. Analysis of information is organised, as far as possible, around the 3 overarching themes (bridging finance; cutting red tape; leadership and training).
2.8 Data collection for Stage 2 consisted of the following 4 key elements:
- Monitoring framework, collecting quarterly information about outputs (activities and participation) in respect of each test site's action plan, and about service users in the test sites and the types of SDS options chosen;
- Case studies of 10 service users in each area (30 in total), involving interviews with the service users and/or their carer/relative, and with the professional involved in assessment, to explore experiences of new processes and procedures implemented during the test sites;
- Learning sets with relevant stakeholders in each test site area to reflect and identify key learning from local experience;
- Key stakeholder interviews with 10-12 individuals at each site (some of whom had been consulted at Stage 1) to understand new processes, the interface with adult protection, and perceptions of impact.
Stage 3 - Reflection on Policy and Practice Implications
2.9 The third and final Stage of the evaluation considered the findings from each test site at an aggregate level, to draw conclusions supported by information and evidence from other studies. The key elements of this Stage were:
- Final learning set in each area, focused on making evaluative judgements in light of the local evaluation findings and reaching an overall view of the success of the interventions, both locally and in terms of their likely applicability in other locations;
- Evaluation stakeholder event, held in March 2011, at which members of the evaluation team presented findings to a mixed audience of 60 test site stakeholders and representatives from Scottish Government, with discussion of perceived learning from the test sites;
- Examination of all findings in light of other information about implementation of SDS and IBs including consideration of statistical data, other research evidence, and interviews conducted earlier with a sample of local authorities about their experiences of SDS.
2.10 Although not a specific topic for study within the research brief issued by the commissioners, the research team took the opportunity to explore the 'adult protection' (AP) interface with SDS in very general terms, through 2 sets of telephone interviews with relevant lead officers in the test sites at the beginning and end of the study, and by consulting individuals representing the perspectives of Scottish Government and national organisations.
2.11 In the main, interviews and focus groups with service users, carers and professionals were digitally recorded and transcribed in full. In some cases, for example where someone did not wish an interview to be recorded or where the interviewee had limited verbal communication, researchers took notes instead. The majority of group interviews or learning set discussions were noted in writing at the time or recorded on flipcharts.
2.12 Interview and focus group data were analysed using standard qualitative data analysis methods, beginning with the identification of key themes and patterns (Silverman, 1993; Coffey & Atkinson, 1996). The process of identifying themes was driven partly by the research objectives, partly by key issues from the literature and, finally, by the team's interpretations, which were checked for accuracy and validity with the local test sites.
2.13 An evaluation team member acted as key contact and coordinator of data collection and reporting for each of the 3 test sites. The designated evaluation coordinator was responsible for analysing locally derived data, and for writing a local report and agreeing it with the local authority. Other data were coded using NVivo 8 (a qualitative data analysis software programme). This report was written by the evaluation managers, who retained a general overview of the test site programme across the 3 sites, drawing on the 3 local evaluation reports. Feedback and comment from the team, the test sites and Scottish Government have been incorporated into this final report.
2.14 Ethical approval for the study was given by the Faculty of Health Ethics Committee at UCLAN. Advice was also taken from the NHS West of Scotland Ethics Board on behalf of NRES, which advised that NHS ethical approval was not required since the study was an evaluation. We took care not to identify participants, and some identifying features have therefore been changed.
2.15 In practice, the nature of the test site programme presented a number of challenges for the evaluation. First, the pre-test situation was difficult to measure, given that at least 2 of the 3 test sites had already begun to make changes to existing structures when the evaluation was commissioned. Moreover, their action plans stated that the test sites were building upon pre-existing change programmes or pilots (such as the IB pilot in Glasgow). Thus, there was no clear 'before and after' SDS situation to evaluate, except in Highland, where a new approach to DPs was implemented.
2.16 The evaluation was also hampered by the relatively short time period (2 years) and the delayed start of the test sites. As the first year was spent developing new systems and approaches, impact and outcomes could be measured only in a limited way. Because there was less than one year in which to measure impact, the funders have commissioned further data collection at the end of 2011, which will be reported on separately.
2.17 Finally, our method of actively involving the sites in the evaluation presented additional challenges as well as advantages. Given the slow start-up and its impact on the development of the test sites' management infrastructures, it was not possible to involve local stakeholders to the extent originally anticipated. Additionally, some of the local authorities did not have a strong track record of engaging stakeholders such as service users and carers, or the independent sector, and these stakeholders had often not been involved in developing the test site action plans. As a result, learning sets could not be set up as planned in the early part of the evaluation. Nonetheless, local test site project managers and others were consulted regularly by the designated evaluation coordinator for each site before data collection tools were finalised, and at all stages of reporting.
2.18 These challenges must be taken into account, as they limit the conclusions that can be drawn from the evaluation. They will be partially addressed by the proposed supplementary assessment of the local authorities later in 2011, which will capture additional impacts beyond the test site evaluation period.
2.19 The next chapter summarises findings from the Stage 1 baseline interviews, including why the 3 themes were identified and perceptions of SDS before the test sites were confirmed. Chapter 4 then looks at how the test sites defined SDS in practice, which service users accessed SDS packages, and how the test sites addressed the 3 themes. This is followed in Chapter 5 by findings from the case studies, providing insight into individual service user and carer experiences in the test sites and identifying key themes. Finally, Chapter 6 draws together key themes from the findings and makes some recommendations.