County South, Lancaster University, Bailrigg, Lancaster LA1 4YD, United Kingdom
Tel: +44 (0) 1524 592907 | E-mail enquiries
The ECB ‘toolkit’ seeks to:
‘Toolkit’ Structure: Ten features of evaluation
The ‘toolkit’ outlines a number of activities which connect to ten ‘features’ of the evaluation process. Although the order and content of each feature are not fixed, for the purposes of linking activities, information, presentations and websites to the ‘toolkit’ we have numbered each feature.
Some features address underpinning principles which are particularly useful for individuals or groups who wish to understand the wider context or appreciate some of the connections between each feature within the evaluation process. Other features focus on practical considerations and include a selection of different activities, from which you can choose whichever seems most suitable to your context and the time you have available. The unique context of each HE institution and Aimhigher Partnership means that you may have already covered some of the activities within the initial features.
The ‘toolkit’ will include five types of information that will help you assemble your evaluation plan. For most stages/steps of the process there will be additional information and optional activities that some or all of the team involved in producing the plan may want to read or follow up. The core stages/steps in the process are outlined in the section entitled ‘things to do’.
See ECB ‘Toolkit’ Materials for a full list of resources. The initial number of each resource relates to the number of that feature; for example, all Evaluation Audit resources begin with 3. Each resource is then given a letter A, B or C to provide a unique identification label.
This ‘toolkit’ builds on the findings from the HEFCE ‘widening participation capacity building in evaluation’ project undertaken by CSET, Lancaster University. It provides details of the tools, approaches and processes piloted in consultancies with 10 HE institutions and 2 Aimhigher Partnerships, together with feedback from participants attending capacity building workshops held between January and June 2008.
The importance of localised evaluation to complement a national programme of evaluation is acknowledged by the funding bodies (HEFCE, DIUS and DCSF). For reasons of accountability, value for money and future policy decisions, they understandably seek to establish a convincing evidence base that allows them and others to feel confident that there is a reasonable link between widening participation interventions and the awareness, attitudes and actions of individual participants and institutional stakeholders. Evaluation of a complex agenda such as widening participation, with its multiple stakeholders, participants and interventions, is not a scientific endeavour and consequently is never going to generate causal proof. At best, evaluation will produce an evidence case akin to that generated in a court of law, where evaluators make a claim for probable and possible connections that others judge to be reasonable.
HEFCE have issued explicit guidance for Aimhigher Partnerships to produce an evaluation plan by July 2008, and HE institutions are actively encouraged to develop their own evaluation plans that will enable them to disseminate the evidence emerging from evaluation of their widening participation activity. Each HE institution will engage in a complex set of widening participation activities determined by its history, geography, student profile and institutional strategy. Although widening participation activity covers pre- and post-arrival activity, the current focus for evaluation plans relates to work which may be funded by the HEFCE widening participation premium, fee income for outreach identified in institutional Access Agreements, funding received from involvement in Aimhigher Partnerships, or external funding for specific widening participation interventions.
The image of a ‘toolkit’ is open to interpretation and is only one of a host of descriptions for this type of resource. Alternative terms which others have used include: guidance, handbook, manual, and framework. The resources contained within the ‘toolkit’ are available in a host of formats, including books, journals, presentations and websites. The idea of a ‘toolkit’ allows users to select and use a tool according to their context. It is for this reason that there are multiple resources available for most features of the evaluation process. We acknowledge that some prefer to follow a set of instructions, more akin to a ‘how to’ guide. Personal preference will determine if this ‘how to’ guide is viewed as a:
No doubt, other metaphors exist, and those who like working in this way may choose to devise their own. It is, however, important to remember that although evaluation is a creative process, it also needs to be rigorous. When devising this ‘toolkit’ we have attempted to provide both the tools and the guidance about how, when and why to use them. As with any good ‘toolkit’, there will always be new tools needed and new tools becoming available, and inevitably many benefits in sharing tools and offering ideas for how to use them. It is for this reason that resources will be located on an evaluation website, which we hope to build on in the future.
For the purposes of this evaluation ‘toolkit’ we define evaluation as follows:
See also the AJ Associates report (Evaluation Toolkit for the Voluntary and Community Arts in N. Ireland 2004, PDF, 303KB), which provides a one-page summary answering the question ‘what is evaluation?’ and explains the role of questions, evidence, causation, perspectives, reflection and learning with respect to evaluation (p8).
This evaluation ‘toolkit’ contains ten sections. Each presents information, ideas and illustrative examples of a particular feature of the evaluation process. The ‘toolkit’ does not attempt to re-invent the wheel and we make no apologies for directing readers to existing resources that we have used to support ‘thinking about’, ‘planning for’ and ‘doing’ evaluation, as well as materials devised for and used with the HE institutions and Aimhigher Partnerships with whom we have worked during the consultancy.
From the “divergent phase” to the “convergent phase” of evaluation planning
The unique context of each HE institution and Aimhigher Partnership means that starting positions will differ, and your use of different sections of the ‘toolkit’ will vary according to your starting position. In our experience there appears to be a capacity building journey that HE institutions and Aimhigher Partnerships follow as they begin to develop their evaluation plans. This involves starting small and tentatively, expanding into broader discussion of evaluation as a methodology, taking account of the politics of HE, working out an acceptable and achievable strategy, and then returning again to smaller concerns about the nature of particular data collection strategies that might be used. It is a case of moving from a “divergent phase” to a “convergent phase”. Not everyone will be interested in all sections of the journey, and this will influence their involvement and engagement with the activities.
As with any specialist activity, the world of evaluation is awash with technical terminology. We are developing a glossary of common terms that gives a definition of how we have used each word and, where appropriate, a reference to other sources of explanation of the concept. It is a useful exercise to build up your own definitions and make sure those with whom you work share your understanding and interpretation. For links to existing glossaries of evaluation and research terminology, see the QDA Online Glossary.