Key: A = Activity | I = Information | P = Presentation | W = Website


5 Evaluation Perspectives

There are a variety of uses or purposes for undertaking an evaluation, including internal, external, political, pragmatic, policy and practical reasons. Eleanor Chelimsky ('Thoughts for a New Evaluation Society', Evaluation, 3(1), 1997, pp. 97-109) distinguishes between what she calls evaluation perspectives. CSET (the Centre for the Study of Education and Training) refers to these as 'uses' or purposes for evaluation. These evaluation perspectives are:

  • Evaluation for accountability (e.g. measuring results or efficiency)
  • Evaluation for development (e.g. providing evaluative help to strengthen institutions or projects)
  • Evaluation for knowledge (e.g. obtaining a deeper understanding in some specific area or policy field)

Each perspective or purpose requires the evaluator to gather specific types of data and to analyse and discuss the evidence collected for a particular audience. The perspective or purpose of the evaluation influences what is relevant and useful. It is likely that an evaluation plan will include the full range of approaches to evaluation; however, individual evaluations will not always address each evaluation perspective. In other words, each evaluation undertaken will serve, to varying degrees, the needs of different audiences. Using the RUFDATA planning framework allows you to work through the core questions and considerations and ensure that your plan addresses the three main perspectives or purposes of evaluation.

 


Things to do

Review the information and decisions made when going through the RUFDATA framework, in particular your answers to the questions about the purpose, use and audience for your evaluation.

Think about the following questions and make sure your plan states the intended outcomes of each evaluation.

  • Are you gathering enough evidence for the purpose of accountability to HEFCE, other funding bodies and external stakeholders?
  • Are you undertaking evaluation that will help you enhance your current provision and develop new activities to address gaps you may have identified in your evaluation?
  • How might your evaluation contribute to debates about topical policy or research agendas?
  • Are your dissemination strategy and communication plan appropriate for the different audiences who may be interested in your evaluation findings?

Discuss with partners/stakeholders who are interested in similar agendas how you might collaborate on evaluation. For instance, are there ways of looking at the same question from the perspectives of different participant groups?

 

P Evaluation Perspectives: Identifying Priorities 5A (PDF slides 310kb; PDF handout 225kb; PowerPoint 200kb)
A presentation on the different evaluative perspectives, with a PowerPoint document for collecting answers to questions about evaluation for accountability, development and knowledge, and for linking the evaluative perspectives.


Accountability

A range of audiences may be interested in your evaluation, each for their own purpose. Some of the questions you may want to consider include:

  • Who requires the evaluation?
  • When do they require the evaluation?
  • Why do they require the evaluation?
  • What do you think they will do with the evaluation?
  • What do you know they will do with the evaluation?

Development

One of the most personally motivating and exciting reasons for undertaking evaluation is to learn more about, and better understand, the nature of your work. Evaluation is a valuable learning process, not only with respect to the impact your work has had on participants, but also in terms of how you work. The developmental perspective on evaluation will often generate insights and illuminate good practice that you may want to share within your institution as well as externally. See section 9 for evaluation dissemination ideas.


Knowledge

Arguably, all evaluation activity extends the knowledge of those involved in it. Evaluation for knowledge refers to the ways in which evaluation findings contribute to policy and research agendas. This might be a specific aim for some evaluators; however, it is possible that the potential and actual contribution to knowledge is not the main focus for many Aimhigher Partnerships and HEI widening participation teams. Instead, the contribution to knowledge is an additional and perhaps unanticipated outcome. For instance, findings from a targeted evaluation may generate evidence that can be submitted in response to a consultation exercise, or the findings in a report may be referred to in a literature review. These two examples illustrate, respectively, a reactive response to a call for evaluation evidence and a more proactive approach that ensures findings are disseminated effectively (see section 9, dissemination).

Connecting and collaborating on evaluative activity

The intention is that an evaluation plan builds up a larger picture of impact at different levels and over time, using different kinds of data for different purposes and making connections between different kinds of evaluation (building up 'layers' of evidence). Connecting and collaborating on evaluative activity can happen between an Aimhigher partnership and a local HEI that work together on delivering activity to the same group of schools or participants. It can also happen between partnerships or HEIs that deliver the same type of activity (e.g. mentoring) or an activity to a specific group of participants (e.g. parents).

Often, evaluations of complex interventions involve collaborations between many stakeholders to identify key questions. The results of evaluations can help us, as learners and teachers in HE, to:

  • find short-term, provisional, creative solutions to the problems produced by change;
  • think more clearly about complex problems, acting as a 'bridging tool' to enable planning and decisions for future action (we might capture this process as 'creating provisional stabilities');
  • uncover evidence, narratives, vignettes and cases about how interventions are being experienced;
  • provide enough knowledge for onward planning;
  • manage a rapidly evolving change, in which evaluation knowledge is provisional, but stable enough for us to make some sense of what we see around us.

Things to do

NB: The activities outlined for individual preparation for evaluation provide a useful framework for identifying areas for collaboration. These may include:

  • shared data (see evaluation practicalities: section 6);
  • a shared focus for evaluation.

If stakeholders have their own evaluation plans, it is useful to exchange them and arrange for a small task group to review them and identify possible areas for collaboration.

Successful collaboration may involve:

  • negotiating the timing of individual evaluations;
  • identifying duplication and agreeing how to undertake complementary evaluations;
  • identifying strategies for sharing evaluative activity, e.g. data collection instruments;
  • sharing data.

It is important to ensure the necessary strategic group endorses any decisions about collaborative evaluation. Ideally these should be written into the evaluation plan.


Examples of connecting and collaborating activity

The following organisations may choose to work together on individual evaluations that contribute to their evaluation plans. If you have an example of how you are working collaboratively on evaluation, please contact us.

  • Aimhigher Partnerships
  • Employers
  • Further Education Colleges
  • Higher Education Institutions
  • Local Authorities
  • Schools


