
Evidence-Based Policy and Practice: Cross-Sector Lessons from the United Kingdom

Sandra Nutley
Professor of Public Policy and Management

Huw Davies
Professor of Health Care Policy and Management

Isabel Walter
Research Fellow

Abstract

This paper identifies key lessons learnt in the public sector quest for policy and practice to become more evidence based.  The annex to this paper provides outlines of, and web links to, specific initiatives across the public sector in the United Kingdom.


Introduction

There is nothing new about the idea that policy and practice should be informed by the best available evidence.  Researchers and analysts have long worked with and within government to provide evidence-based policy advice, and the specific character of the relationship between social research and social policy in Britain was shaped in the 19th and 20th centuries (Bulmer 1982).  The 1960s represented a previous high point in the relationship between researchers and policy makers (Bulmer 1986, Finch 1986).  However, during the 1980s and early 1990s there was a distancing and even dismissal of research in many areas of policy, as the doctrine of "conviction politics" held sway. 

In the United Kingdom it was the landslide election of the Labour government in 1997, subsequently returned with a substantial majority in 2001, that revitalised interest in the role of evidence in the policy process.  In setting out its modernising agenda, the government pledged, "We will be forward-looking in developing policies to deliver outcomes that matter, not simply reacting to short-term pressures" (Cm 4310 1999).  The same white paper proposed that being evidence based was one of several core features of effective policy making, a theme developed in subsequent government publications (Performance and Innovation Unit 2001, National Audit Office 2001, Bullock et al. 2001). 

In the wake of this modernising agenda, a wide range of ambitious initiatives has been launched to strengthen the use of evidence in public policy and practice.  A cross-sector review of some of these can be found in the book What Works? Evidence-Based Policy and Practice in Public Services (Davies et al. 2000) and in two special issues of the journal Public Money and Management (Jan 1999, Oct 2000).  In order to give a flavour of the range, scope and aims of these developments, the annex to this paper provides an overview of two generic initiatives and a summary of several sector-specific developments.

This paper seeks to draw out some of the key lessons that have emerged from the experience of trying to ensure that public policy and professional practice are better informed by evidence than has hitherto been the case.  It does this by highlighting four requirements for improving evidence use and considering progress to date in relation to each of these.

Because the use of evidence is just one imperative in effective policy making, and because policy making is always inherently political, a caveat is appropriate at this point.  Professional practice, too, is generally contingent on client needs and local context, so similar warnings apply there.  The term "evidence-based", when attached as a modifier to policy or practice, has become part of the lexicon of academics, policy people, practitioners and even client groups.  Yet such glib terms can obscure the sometimes limited role that evidence can, does, or even should, play.  In recognition of this, we would prefer "evidence-influenced", or even just "evidence-aware", to reflect a more realistic view of what can be achieved.  Nonetheless, we will continue the current practice of referring to "evidence-based policy and practice" (EBPP) as a convenient shorthand for the collection of ideas around this theme, which has risen to prominence over the past two decades.  On encountering this term, we trust the reader will recall our caveat and moderate their expectations accordingly.


Four requirements for improving evidence use in policy and practice

If evidence is to have a greater impact on policy and practice, then four key requirements need to be met:

  1. agreement as to what counts as evidence in what circumstances;
  2. a strategic approach to the creation of evidence in priority areas, with concomitant systematic efforts to accumulate evidence in the form of robust bodies of knowledge;
  3. effective dissemination of evidence to where it is most needed and the development of effective means of providing wide access to knowledge; and
  4. initiatives to ensure the integration of evidence into policy and encourage the utilisation of evidence in practice.

The remainder of this paper takes each of these areas in turn both to explore diversity across the public sector and to make some tentative suggestions about how the EBPP agenda may be advanced.


The nature of evidence

In addressing the EBPP agenda in 1999, the United Kingdom Government Cabinet Office described evidence as:

Expert knowledge; published research; existing statistics; stakeholder consultations; previous policy evaluations; the Internet; outcomes from consultations; costings of policy options; output from economic and statistical modelling. (Strategic Policy Making Team 1999)

This broad and eclectic definition clearly positions research-based evidence as just one source amongst many, and explicitly includes informal knowledge gained from work experience or service use:

There is a great deal of critical evidence held in the minds of both front-line staff … and those to whom policy is directed. (ibid.)

Such eclecticism, whilst inclusive and serving to bring to the fore hitherto neglected voices such as those of service users, also introduces problems of selecting, assessing and prioritising evidence.  A survey of policy making in 2001 (Bullock et al. 2001) found that a more limited range of evidence appeared to be used by government departments: domestic and international research and statistics, policy evaluation, economic modelling and expert knowledge.

It is instructive that such egalitarianism about sources of evidence is not found in all parts of the public sector.  Health care, for example, has an established "hierarchy of evidence" for assessing what works.  This places randomised experiments (or, even better, systematic reviews of these) at the apex; observational studies and professional consensus are accorded much lower credibility (Hadorn et al. 1996, Davies and Nutley 1999).  This explicit ranking has arisen for two reasons.  First, in health care there is a clear focus on providing evidence of efficacy or effectiveness: which technologies or other interventions are able to bring about desired outcomes for different patient groups.  The fact that what counts as "desired outcomes" is readily understood (i.e. reductions in mortality and morbidity, and improvements in quality of life) greatly simplifies the methodological choices.  The second reason for such an explicit methodological hierarchy lies in bitter experience: much empirical research suggests that biased conclusions may be drawn about treatment effectiveness from the less methodologically rigorous approaches (Schulz et al. 1995, Kunz and Oxman 1998, Moher et al. 1998).

In contrast to the hierarchical approach in health care, other sector areas (such as education, criminal justice and social care) are riven with disputes as to what constitutes appropriate evidence.  Also, there is relatively little experimentation (especially compared with health care), and divisions between qualitative and quantitative paradigms run deep (Davies et al. 2000).  This happens in part because of the more diverse and eclectic social science underpinnings in these sectors (in comparison to the natural sciences underpinning in much of health care), and in part because of the multiple and contested nature of the outcomes sought.  Thus knowledge of "what works" tends to be influenced greatly by the kinds of questions asked, and is, in any case, largely provisional and highly dependent on context.

Randomised experiments can answer the pragmatic question of whether intervention A provided, in aggregate, better outcomes than intervention B in the sampled population.  However, such experiments do not answer the more testing question of whether and what aspects of interventions are causally responsible for a prescribed set of outcomes.  This may not matter if interventions occur in stable settings where human agency plays a small part (as is the case for some medical technologies), but in other circumstances there are dangers in generalising from experimental to other contexts. 

Theory-based evaluation methods often seem to hold more promise, because of the way in which they seek to unravel the causal mechanisms that make interventions effective in context, but they too face limitations.  This is because theory-based evaluation presents significant challenges in terms of articulating theoretical assumptions and hypotheses, measuring changes and effects, and developing appropriate tests of assumptions and hypotheses (Weiss 1995, Sanderson 2002).

These observations suggest that if we are indeed interested in developing an agenda where evidence is more influential then, first of all, we need to develop some agreement as to what constitutes evidence, in what context, for addressing different types of policy and practice questions.  This will involve being more explicit about the role of research vis-à-vis other sources of information, as well as a greater clarity about the relative strengths and weaknesses of different methodological stances.  Such methodological development needs to emphasise a "horses for courses" approach, identifying which policy and practice questions are amenable to analysis through what kinds of specific research techniques.  Further, it needs to emphasise methodological pluralism, rather than continuing paradigmatic antagonisms; seeking complementary contributions from different research designs, rather than epistemological competition.  The many stakeholders within given service areas (e.g. policy makers, research commissioners, research contractors and service practitioners) will need to come together and seek broad agreement over these issues if research findings are to have wider impact beyond devoted camps.  One initiative within social care to tackle such an agenda is outlined in Box 1.


A strategic approach to knowledge creation

Whichever part of the public sector one is concerned with, one observation is clear: the current state of research-based knowledge is insufficient to inform many areas of policy and practice.  There remain large gaps and ambiguities in the knowledge base, and the research literature is dominated by small, ad hoc studies, often diverse in approach, and of dubious methodological quality.  In consequence, there is little accumulation from this research of a robust knowledge base on which policy makers and practitioners can draw.  Furthermore, additions to the research literature are more usually driven by the research-producer rather than led by the needs of the research users.

Recognition of these problems has led to many attempts to develop research and development (R&D) strategies to address them.  Government departments have generally taken the lead in developing research strategies for specific policy areas.  These not only seek to structure the research that is funded directly by government, but also aim to influence research funded by non-governmental bodies.  For example, in the United Kingdom the Department for Education and Skills has played a leading role in establishing the National Educational Research Forum (NERF), which brings together researchers, funders and users of research evidence.  NERF has been charged with responsibility for developing a strategic framework for research in education, including: identifying research priorities, building research capacity, co-ordinating research funding, establishing criteria for the quality of research and considering how to improve the impact of research.

Developing such strategies necessarily requires addressing a number of key issues.

  • What approaches can be used to identify gaps in current knowledge provision, and how should such gaps be prioritised?
  • How should research be commissioned (and subsequently managed) to fill identified gaps in knowledge?
  • What is an appropriate balance between new primary research and the exploitation of existing research through secondary analysis?
  • What research designs are appropriate for specific research questions, and what are the methodological characteristics of robust research?
  • How can the need for rigour be balanced with the need for timely findings of practical relevance?
  • How can research capacity be developed to allow a rapid increase in the availability of research-based information?
  • How are the tensions managed between the desirability of "independent" researchers, free from overt political contamination, and the need for close cooperation (bordering on dependence) between research users and research providers?
  • How should research findings be communicated, and, more importantly, how can research users be engaged with the research production process to ensure more ready application of its findings?

Tackling these issues is the role of effective R&D strategies, but gaining consensus or even widespread agreement will not be easy.  The need to secure some common ground between diverse stakeholders does, however, point the way to more positive approaches.  The traditional separation between the policy arena, practitioner communities and the research community has largely proven unhelpful.  Recent thinking emphasises the need for partnerships if common ground is to be found (Laycock 2000, Nutley et al. 2000).


Effective dissemination and wide access

Given the dispersed and ambiguous nature of the existing evidence base, a key challenge is how to improve access to robust bodies of knowledge.  Much of the activity built around supporting the EBPP agenda has focused on searching for, synthesising, and then disseminating current best knowledge from research.  Thus the production of systematic reviews has been a core activity of such organisations as the Cochrane Collaboration (health care), the Campbell Collaboration (broader social policy, most notably criminal justice), the NHS Centre for Reviews & Dissemination (health care again), and the Evidence for Policy and Practice Information (EPPI) Centre (education).  Despite such activity, a major barrier to review efforts is the significant cost involved in undertaking a systematic review - estimated at around UK£51,000 per review (Gough 2001).

Systematic reviews seek to identify all existing research studies relevant to a given evaluation issue, assess them for methodological quality and produce a synthesis based on the studies considered to be relevant and robust.  The methodology for undertaking such reviews has been developed in the context of judging the effectiveness of medical interventions and tends to focus on the synthesis of quantitative (particularly experimental) data.  This leads to questions about the extent to which such an approach can or should be transferred to other areas of public policy (Boaz et al. 2002).  There are examples of systematic review activity outside of clinical care (see Box 2), but concern remains that the approach needs to be developed in order to establish successful ways of:

  • involving users in defining problems and questions;
  • incorporating a broader range of types of research in reviews; and
  • reviewing complex issues, interventions and outcomes.

One of the aims of the ESRC's EvidenceNetwork is to contribute to methodological developments on these issues (Boaz et al. 2002) and its work to date has included exploration of a "realist" approach to synthesis (Pawson 2002a, 2002b).

Whether the focus is on primary research or on the systematic review of existing studies, a key issue is how to communicate findings to those who need to know about them.  The strategies used to get research and review findings to where they can be utilised involve both dissemination (pushing information from the centre outwards) and provision of access (web-based and other repositories of information that research users can tap into). 

Much effort has gone into improving the dissemination process and good practice guidance abounds (see Box 3 for one example).  This has developed our appreciation of the fact that dissemination is not a single or simple process - different messages may be required for different audiences at different times.  It appears that the promulgation of individual research findings may be less appropriate than distilling and sharing pre-digested research summaries.  Evidence to date also suggests that multiple channels of communication - horizontal as well as vertical, and networks as well as hierarchies - may need to be developed in parallel (Nutley and Davies 2000).

Despite improvements in our knowledge about effective dissemination, one main lesson has emerged from all of this activity.  This is that pushing information from the centre out is insufficient and often ineffective: we also need to develop strategies that encourage a "pull" for information from potential end users.  By moving our conceptualisations of this stage of the EBPP agenda away from ideas of passive dissemination and towards much more active and holistic change strategies, we may do much to overcome the often disappointing impact of evidence seen so far (Nutley et al. 2000).


Initiatives to increase the uptake of evidence

Increasing the uptake of evidence in both policy and practice has become a preoccupation for both policy people and service delivery organisations.  The primary concern for those wishing to improve the utilisation of research and other evidence is how to tackle the problem of underuse, where findings about effectiveness are either not applied, or are not applied successfully.  However, concerns have also been raised about overuse, such as the rapid spread of tentative findings, and about misuse, especially where evidence of effectiveness is ambiguous (Walshe and Rundall 2001).  The introduction to this paper referred to a myriad of initiatives aimed at improving the level of evidence use in public policy and professional practice.  This section focuses attention on the integration of evidence into policy, and it also includes a few words on ways of getting evidence to inform professional practice.

United Kingdom Government reports aimed at improving the process by which policy is made set out a number of recommendations for increasing evidence use (see Box 4).  These include mechanisms to increase the "pull" for evidence, such as requiring spending bids to be supported by an analysis of the existing evidence base, and mechanisms to facilitate evidence use, such as integrating analytical staff at all stages of the policy development process.  The need to improve the dialogue between policy makers and the research community is a recurring theme within government reports.  It is sensible that such dialogues should not be constrained by one policy issue or one research project.  This raises questions about what the ongoing relationship between policy makers and external researchers should be.  Using the analogy of personal relationships, it has been suggested that promiscuity, monogamy and bigamy should all be avoided.  Instead, polygamy is recommended, where policy makers consciously and openly build stable relationships with a number of partners who each offer something different, know of each other and can understand and respect the need to spread oneself around (Solesbury 1999). 

One of the problems with many of the recommendations about how to improve evidence use in the policy-making process is that they are rarely themselves evidence based.  Or, if there is evidence to support a particular course of action, this evidence is rarely cited.  Although Better Policy Making (Bullock et al. 2001) provides 130 examples of good practice from a diverse range of departments, initiatives and policy areas, these are not necessarily representative or evaluated.  Instead they aim to illustrate professional, interesting and innovative approaches to modernising policy making.  If we look elsewhere, there is some empirical research that has identified a number of circumstances when research evidence is more likely to be incorporated into policy (Box 5).  This list of circumstances should serve to remind us that while it is tempting to think of evidence entering the policy process as part of a rational decision-making process, reality is often far more messy and overtly political than this.  There are at least four ways in which evidence might influence policy (see Box 6) and the instrumental use of research is in fact quite rare (Weiss 1980).  It is most likely where the research findings are non-controversial, require only limited change and will be implemented within a supportive environment: in other words, when they do not upset the status quo (Weiss 1998).

Once we acknowledge that evidence is used in various ways by different people in the policy process, government does not appear to be the "evidence-free zone" that is sometimes depicted.  The evidence that is used is wide ranging.  Policy makers need information, not only about the effectiveness of a procedure and the relationship between the risks and the benefits, but also about its acceptability to key constituencies, and its ease and cost of implementation.  They use information in the way that they do because the central challenge is not just to exercise sound technical judgement, but to "conciliate between all the interests and institutions of the society, and between the interests and institutions represented in the policy-making process" (Perri 6 2002:4).  The quest for evidence-based policy should not, it is argued, be driven by a desire to reduce policy making to technical analysis; accommodating divergence rather than imposing convergence appears to be the key to a well-functioning democratic polity (Perri 6 2002).

Analyses of the policy process have demonstrated the importance of looking beyond formal government institutions in order to consider policy networks - the patterns of formal and informal relationships that shape policy agendas and decision making.  One of the main ways by which research evidence becomes known and is discussed within policy networks is through the process of advocacy (Sabatier and Jenkins-Smith 1993).  Around many major issues there have evolved groups of people who have long-term interests and knowledge in shaping policy.  These interest groups are important purveyors of data and analysis: "It is not done in the interests of knowledge, but as a side effect of advocacy" (Weiss 1987:278).  Concepts such as policy networks and policy communities highlight the potential for a different vision of how the policy process might be improved to encourage greater research use; one that is more focused on "democratising" the policy process as opposed to "modernising" it (Parsons 2002).

Policy making is ultimately about delivering outcomes - "desired change in the real world" (Cm 4310 1999).  Turning policy into concrete action in pursuit of policy goals has focused attention on the implementation of policy at the multiple points of contact between service users and public service provision.  Thus in parallel with a renewed emphasis on evidence at a policy level, there has been the development of a similar set of concerns at practice level.  Within health care there has been extensive examination of what works in achieving practitioner change.  The Effective Practice and Organisation of Care Group (EPOC), part of the Cochrane Collaboration, has undertaken systematic reviews of a range of interventions designed to improve the practice of individual health care professionals.  A summary of findings from these (NHS Centre for Reviews and Dissemination 1999) suggests that the passive dissemination of educational information and traditional continuing professional development approaches are generally ineffective.  Many other interventions were found to be of variable effectiveness, including audit and feedback, opinion leaders, interactive continuing professional development, local consensus models and patient-mediated interventions.  More positively, financial incentives, educational outreach visits and reminder systems were found to be consistently effective.  Most importantly, the most effective strategies were multi-faceted and explicitly targeted identified barriers to change.

A key message to emerge from reviews of practices to increase research impact (NHS Centre for Reviews and Dissemination 1999, Walter et al. 2003) is that the change process must reflect and be tailored to the complex nature of research implementation.  Interventions need to develop and be guided by a "diagnostic analysis" that identifies factors likely to influence the proposed change.  This acknowledges that nothing works all the time, and emphasises the importance of the local circumstances that mediate implementation strategies.  There is also increasing recognition of the role of the wider organisational and systemic contexts within which evidence is used: practitioners do not and cannot work in isolation.

Overall, a striking feature of the existing literature on ways of improving the uptake of evidence in both policy and practice is the common conclusion that the way forward should be to develop better, ongoing interaction between evidence providers and evidence users (Nutley et al. 2002).  This echoes Huberman's (1987) call for "sustained interactivity" between researchers and practitioners throughout the process of research, from the definition of the problem to the application of findings.  Closer and more integrated working over prolonged periods would seem to be capable of fostering cross-boundary understandings.  Doing so, however, is not cheap or organisationally straightforward, and it raises some serious concerns about independence and impartiality.  Nonetheless, examples of successful development of policy from suggestive evidence, policy that is then seen through to practice change and beneficial outcomes, often display an unusual degree of partnership (see Box 7). 


Conclusions

This overview of cross-sector experience has identified some progress, but also many outstanding challenges facing EBPP in public services in the United Kingdom.  A number of lessons emerge in relation to our four requirements for improving evidence use in policy and practice and these are summarised below.

The Nature of Evidence

  • Research is only one source of evidence for public policy and practice.
  • Agreement as to what counts as evidence should emphasise a "horses for courses" approach.  "Ways and means" matrices - which relate the kinds of understanding sought to the most appropriate means of achieving each - are likely to be more beneficial in the long run than simple hierarchies of evidence.

A Strategic Approach to Knowledge Creation

  • Stakeholder involvement in the creation of wide-ranging R&D strategies is crucial.
  • Such strategies need to address capacity building as well as priority areas for future research.

Effective Dissemination and Wide Access

  • Systematic reviews have the potential to increase access to robust bodies of knowledge but the cost of such reviews and the need for further methodological development in this area are barriers to progress.
  • We know much about the features of effective dissemination but even good dissemination has its limits - "pushing" evidence out is not enough, there is also a need to develop the "pull" for evidence from potential end users.

Increasing the Uptake of Evidence

  • Uptake needs to be defined broadly - there are many ways in which evidence might be utilised appropriately.
  • There are a myriad of initiatives aimed at increasing the use of evidence in policy and practice but there is little systematic evidence on the effectiveness of these.
  • Tentative evidence suggests that multi-faceted strategies that explicitly target barriers to change work best.
  • Partnership models, which encourage ongoing interaction between evidence providers and evidence users, may be the way forward.

The key theme that emerges is that simple and unproblematic models of EBPP - where evidence is created by research experts and drawn on as necessary by policy makers and practitioners - fail as either accurate descriptions or effective prescriptions.  The relationships between research, knowledge, policy and practice are always likely to remain loose, shifting and contingent.

The vision should be to create a society where analysts and experts are "on tap but not on top" - a society that is active in its self-critical use of knowledge and social science (Etzioni 1968, 1993).  In such a vision, research evidence may well be used as a political weapon but "when research is available to all participants in the policy process, research as political ammunition can be a worthy model of utilisation" (Weiss 1979).  Of course, a problem arises when certain groups in society do not have access to research and other evidence, and, even if they did, their ability to use this evidence is restricted due to their exclusion from the networks that shape policy decisions.

Recent developments in the provision of evidence over the Internet may encourage more open debates that are not confined to those operating in traditional expert domains.  Similarly, the establishment of intermediary bodies (such as NICE and SCIE - see Annex) to digest existing evidence may facilitate the opening up of evidence-based policy and practice debates.

An optimistic scenario for the future is that initiatives that encourage consultation, through devices like policy action teams and service planning forums, will widen the membership of policy and practice communities.  The involvement of wider interests in these teams is likely to set a different agenda and lead to a more practice-based view of policy and delivery options.  The use of research and other evidence under such a scenario is likely to be diffuse.  To operate effectively within such a scenario, policy makers and service planners will require a broad range of skills and developing appropriate analytical skills may be the least of their worries.


Box 1: types and quality of knowledge in social care

In 2002 the Social Care Institute for Excellence (SCIE) commissioned a research project to:

  • Identify and classify types of social care knowledge (Stage 1), and
  • Develop ways of assessing their quality that will be acceptable to a wide spectrum of sometimes conflicting opinion (Stage 2).

Stage 1 of the research project recommends a classification system based on five different sources of social care knowledge: organisational knowledge, practitioner knowledge, user knowledge, research knowledge and policy community knowledge.

Stage 2 of the project has yet to be completed, but a provisional quality assessment framework has been proposed, based on six generic quality standards (transparency, accuracy, purposivity, utility, propriety and accessibility) and commentaries on existing or potential standards for each of the five knowledge sources identified in Stage 1.

The project involved significant participation by service users, practitioners and other experts in the production of the classification framework and the quality standards.


Box 2: examples of systematic reviews in the "real world"

  • Review question: Does spending more money on schools improve educational outcomes?
    Methods: Meta-analysis of effect sizes from 38 publications.
    Authors’ conclusions: Systematic positive relation between resources and student outcomes.

  • Review question: Do women or men make better leaders?
    Methods: Review of organisational and laboratory experimental studies of relative effectiveness of women and men in leadership and managerial roles.
    Authors’ conclusions: Aggregated over organisational and laboratory experimental studies in the sample, male and female leaders were equally effective.

  • Review question: Does the sexual orientation of the parent matter?
    Methods: Review investigating the impact that having homosexual as opposed to heterosexual parents has on the emotional wellbeing and sexual orientation of the child.
    Authors’ conclusions: Results show no differences between heterosexual and homosexual parents in terms of parenting styles, emotional adjustment, and sexual orientation of the child.

  • Review question: Are fathers more likely than mothers to treat their sons and daughters differently?
    Methods: Review of 39 published studies.
    Authors’ conclusions: Fathers’ treatment of boys and girls differed most in areas of discipline and physical involvement and least in affection or everyday speech.  Few differences for mothers.

  • Review question: Is job absenteeism an indicator of job dissatisfaction?
    Methods: Review of 23 research studies.
    Authors’ conclusions: Yes; a stronger association was observed between job satisfaction and frequency of absence than between satisfaction and duration of absence.

  • Review question: Are jurors influenced by defendants’ race?
    Methods: Meta-analytic review of experimental studies.
    Authors’ conclusions: Results are consistent in finding that race influences sentencing decisions.

  • Review question: Is there a relation between poverty, income inequality and violence?
    Methods: Review of 34 studies reporting on violent crime, poverty, and income inequality.
    Authors’ conclusions: Results suggest that homicide and assault may be more closely associated with poverty or income inequality than rape or robbery.

Source: Petticrew 2001.


Box 3: improving dissemination

Recommendations for research commissioners:

  • Phase research to deliver timely answers to specific questions facing practitioners and policy makers.
  • Ensure relevance to current policy agenda.
  • Allocate dedicated dissemination and development resources within research funding.
  • Include a clear dissemination strategy at the outset.
  • Involve professional researchers in the commissioning process.
  • Involve service users in the research process.
  • Commission research reviews to synthesise and evaluate research.

Recommendations for researchers:

  • Provide accessible summaries of research.
  • Keep the research report brief and concise.
  • Publish in journals or publications that are user friendly.
  • Use language and styles of presentation that engage interest.
  • Target material to the needs of the audience.
  • Extract the policy and practice implications of research.
  • Tailor dissemination events to the target audience and evaluate them.
  • Use a combination of dissemination methods.
  • Use the media.
  • Be proactive and contact relevant policy and delivery agencies.
  • Understand the external factors likely to affect the uptake of research.

Source: Abstracted from Barnardo’s R&D Team 2000.


Box 4: encouraging better use of evidence in policy making

Increasing the pull for evidence:

  • Require the publication of the evidence base for policy decisions.
  • Require departmental spending bids to provide a supporting evidence base.
  • Submit government analysis (such as forecasting models) to external expert scrutiny.
  • Provide open access to information - leading to more informed citizens and pressure groups.

Facilitating better evidence use:

  • Encourage better collaboration across internal analytical services (e.g. researchers, statisticians and economists).
  • Co-locate policy makers and internal analysts.
  • Integrate analytical staff at all stages of the policy development process.
  • Link R&D strategies to departmental business plans.
  • Cast external researchers more as partners than as contractors.
  • Second more university staff into government.
  • Train staff in evidence use.

Source: Abstracted from Performance and Innovation Unit 2000, Bullock et al. 2001.


Box 5: evidence into policy

Attention is more likely to be paid to research findings when:

  • The research is timely, the evidence is clear and relevant, and the methodology is relatively uncontested. 
  • The results support existing ideologies and are convenient and uncontentious to the powerful.
  • Policy makers believe in evidence as an important counterbalance to expert opinion and act accordingly.
  • The research findings have strong advocates. 
  • Research users are partners in the generation of evidence.
  • The results are robust in implementation.  
  • Implementation is reversible if need be.

Source: Adapted and extended from Finch 1986, Rogers 1995 and Weiss 1998.


Box 6: types of research utilisation

1. Instrumental use
Research feeds directly into decision-making for policy and practice.

2. Conceptual use
Even if policy makers or practitioners are blocked from using findings, research can change their understanding of a situation, provide new ways of thinking, and offer insights into the strengths and weaknesses of particular courses of action.  New conceptual understandings can then sometimes be used in instrumental ways.

3. Mobilisation of support
Here, research becomes an instrument of persuasion.  Findings - or simply the act of research - can be used as a political tool and legitimate particular courses of action or inaction.

4. Wider influence
Research can have an influence beyond the institutions and events being studied.  Evidence may be synthesised.  It might come into currency through networks of practitioners and researchers, and alter policy paradigms or belief communities.  This kind of influence is both rare and hard to achieve, but research adds to the accumulation of knowledge that ultimately contributes to large-scale shifts in thinking and, sometimes, action.

Source: Adapted from Weiss 1998.


Box 7: getting evidence into policy & practice: repeat victimisation

Gloria Laycock, now Director of the Jill Dando Institute of Crime Science, has described how, when she was in the Home Office, a remit was given to researchers to "find an area with a high burglary rate, make it go down, and tell us how you did it".  An inter-agency project team was brought together of academics, police, probation staff and others.  Their analysis showed that there was a great deal of "repeat victimisation": if a house had been burgled there was a significantly higher risk that it would be burgled again.  This led the team to focus on victims as a way of reducing crime.  By a variety of means they protected victims in a demonstration project, and reduced repeat victimisation to zero in seven months.  The burglary rate in this demonstration area overall fell by 75% over the following three years.

Alongside the ongoing development of the evidence base, a key challenge was to use the emerging research findings to inform crime prevention policy and practice more generally.  Policy development was enabled by the co-location of the police research and policy teams in the Home Office.  Researchers were able to develop close working relationships with policy makers and had direct access to high-level ministers, rather than operating at arm's length through policy brokers.  Home Office researchers were also given the responsibility for policy implementation around repeat victimisation and they worked closely with police officers on the ground on this issue.  Ultimately, repeat victimisation was adopted as one of the police performance indicators for the prevention of crime, and the original research was seen to have had significant impact.

Source: Laycock 2001.


Annex: some evidence-based policy and practice initiatives in the United Kingdom

Two Generic Initiatives to Enhance the Use of Evidence in Policy Making

The Centre for Management and Policy Studies (now part of the Strategy Unit within the Cabinet Office) was given the task of promoting practical strategies for evidence-based policy-making, which it took forward through:

  • the development of "knowledge pools" to promote effective sharing of information;
  • training officials in how to interpret, use and apply evidence;
  • a policy hub website providing access to knowledge pools, training programmes and government departments' research programmes (http://policyhub.gov.uk/default.jsp); and
  • implementing the findings of Adding It Up with the Treasury, including organising placements to bring academics into Whitehall to carry out research projects.

The ESRC United Kingdom Centre for Evidence-Based Policy and Practice is an initiative funded by the Economic and Social Research Council.  The Centre, together with an associated network of university centres of excellence, is intended to foster the exchange of social science research between policy makers, researchers and practitioners.  It aims to:

  • improve the accessibility, quality and usefulness of research;
  • develop methods of appraising and summarising research relevant to policy and practice; and
  • inform and advise those in policy making roles, through its dissemination function.

An Overview of Evidence-Based Policy and Practice Initiatives in Four Sectors

Criminal justice
The criminal justice field is mainly characterised by systematic, top-down approaches to getting evidence into practice.  For example, the "What Works" initiative within the Probation Service of England and Wales has taken lessons from small-scale programmes and wider research and applied them to redesign the whole system of offender supervision (HM Inspectorate of Probation 1998, Furniss and Nutley 2000).  Similarly, in 1999 the Home Office launched the Crime Reduction Programme, which represents a major investment by the government in an evidence-based strategy to pilot new ways of tackling crime.  The overall objectives of the programme were to:

  • achieve long-term and sustained reduction in crime through implementing "What Works" and promoting innovation into mainstream practice;
  • generate a significant improvement in the crime-reduction knowledge-base; and
  • deliver real savings through the reduction of crime and improved programme efficiency and effectiveness.

Education
Since the late 1990s a somewhat bewildering array of dispersed initiatives has been launched to improve the supply, accessibility and uptake of research evidence in education.  These include initiatives to:

  • develop a research and development strategy - the National Education Research Forum;
  • increase the evidence base for education, such as the ESRC's Teaching and Learning Research Programme (http://www.tlrp.org);
  • systematically review the existing evidence base - the Evidence for Policy and Practice Information (EPPI) Centre (http://eppi.ioe.ac.uk), which has completed the first wave of reviews (in teaching English, leadership, inclusion, gender, further education and assessment); and
  • encourage teacher use of research evidence, such as those sponsored by the Teacher Training Agency (http://www.canteach.gov.uk).

Health Care
The arrival of the first NHS research and development strategy in 1991 (Peckham 1991) represented a major shift in the approach to research in healthcare.  It aimed to ensure that research funding was directed to areas of agreed need and was focused on robust research designs.  Subsequent initiatives have sought to:

  • systematically review the existing evidence base - such as the Cochrane Collaboration (http://www.cochrane.org) and the NHS Centre for Reviews and Dissemination (http://www.york.ac.uk/inst/crd);
  • collate and disseminate evidence on effectiveness - for example the clinical and practice guidelines  promulgated by the Royal Colleges (http://sign.ac.uk);
  • provide robust and reliable (prescriptive) guidance on current best practice - such as the government sponsored National Institute for Clinical Excellence (NICE - http://www.nice.org.uk), which reviews evidence on the effectiveness and cost-effectiveness of health technologies;
  • establish review structures to ensure that practice is informed by evidence - such as clinical audit and clinical governance activities (http://www.doh.gov.uk/clinicalgovernance); and
  • change individual clinician behaviour via a new approach to clinical problem solving - this has been the specific aim of the Evidence-Based Medicine movement.

Social Care
Until recently initiatives in social care have been somewhat fragmented and localised.  These include:

  • The Centre for Evidence-Based Social Services, based at the University of Exeter and working with a group of local authorities in the South West of England (http://www.ex.ac.uk/cebss);
  • Research in Practice, an initiative to disseminate childcare research to childcare practitioners and to enable them to use it (http://www.rip.org.uk); and
  • Barnardo's and "What Works" - Barnardo's is the United Kingdom's largest childcare charity.  It has produced a series of overviews of evidence on effective interventions relevant to children's lives and has sought to ensure that its own practice is evidence based (see http://www.barnardos.org.uk).

More recently, a government-sponsored Social Care Institute for Excellence (SCIE) has been established.  It aims to improve the quality and consistency of social care practice and provision by creating, disseminating and working to implement best practice guidance in social care through a partnership approach.  It forms part of the government's Quality Strategy for Social Care but has operational independence (see http://www.scie.org.uk).


References

Barnardo's R&D Team (2000) "What works? Making connections: linking research and practice" Barnardo's (accessed 1 May 2003).

Boaz, A., D. Ashby and K. Young (2002) "Systematic reviews: what have they got to offer evidence based policy and practice?" Working Paper 2, ESRC UK Centre for Evidence Based Policy and Practice, Queen Mary, University of London (accessed 1 May 2003).

Bullock, H., J. Mountford and R. Stanley (2001) Better Policy-Making, Cabinet Office, Centre for Management and Policy Studies, London (accessed 7 June 2002).

Bulmer, M. (1982) The Uses of Social Research, Allen and Unwin, London.

Bulmer, M. (1986) Social Science and Policy, Allen and Unwin, London.

Cm 4310 (1999) Modernising Government, White Paper, The Stationery Office, London (accessed 7 June 2002).

Davies, H.T.O. and S.M. Nutley (1999) "The rise and rise of evidence in health care" Public Money & Management 19(1): 9-16.

Davies, H.T.O., S.M. Nutley and P.C. Smith (eds.) (2000) What Works? Evidence-Based Policy and Practice in Public Services, The Policy Press, Bristol.

Etzioni, A. (1968) The Active Society: A Theory of Societal and Political Processes, Free Press, New York.

Etzioni, A. (1993) The Spirit of Community: Rights, Responsibilities and the Communitarian Agenda, Crown Publishers, New York.

Finch, J. (1986) Research and Policy: The Use of Qualitative Methods in Social and Educational Research, The Falmer Press, London.

Furniss, J. and S.M. Nutley (2000) "Implementing what works with offenders - the Effective Practice Initiative" Public Money and Management, 20(4):23-28.

Gough, D. (2001) personal communication, cited in Boaz et al. (2002) "Systematic reviews: what have they got to offer evidence based policy and practice?" Working Paper 2, ESRC UK Centre for Evidence Based Policy and Practice, Queen Mary, University of London.

Hadorn, D.C., D. Baker, et al. (1996) "Rating the quality of evidence for clinical practice guidelines" Journal of Clinical Epidemiology 49:749-54.

HM Inspectorate of Probation (HMIP) (1998) Strategies for Effective Offender Supervision: Report of the HMIP What Works Project, Home Office, London.

Homel, P., B. Webb, S.M. Nutley and N. Tilley (forthcoming) Investing to Deliver: Reviewing the Implementation of the Crime Reduction Programme, Home Office, London.

Huberman, M. (1987) "Steps toward an integrated model of research utilization" Knowledge, June:586-611.

Kunz, R. and A.D. Oxman (1998) "The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials" BMJ (Clinical Research Ed.) 317(7167):1185-90.

Laycock, G. (2000) "From central research to local practice: identifying and addressing repeat victimisation" Public Money and Management 20(4): 17-22.

Laycock, G. (2001) "Hypothesis-based research: the repeat victimization story" Criminal Justice, 1(1):59-82.

Moher, D., B. Pham et al. (1998) "Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses?" Lancet 352(9128): 609-13.

National Audit Office (2001) Modern Policy-Making: Ensuring Policies Deliver Value for Money, The Stationery Office, London (accessed 7 June 2002).

NHS Centre for Reviews and Dissemination (1999) "Getting evidence into practice" Effective Health Care Bulletin, 5(1) February, The Royal Society of Medicine Press, London.

Nutley, S.M. and H.T.O. Davies (2000) "Making a reality of evidence-based practice: some lessons from the diffusion of innovations" Public Money and Management, 20(4):35-42.

Nutley, S.M., H.T.O. Davies and N. Tilley (2000) "Getting research into practice" Public Money & Management, 20(4):3-6.

Nutley, S. M., H.T.O. Davies and I. Walter (2002) "From knowing to doing: a framework for understanding the evidence-into-practice agenda" Discussion Paper 1, Research Unit for Research Utilisation, St Andrews, due to be published in Evaluation in 2003.

Parsons, W. (2002) "From muddling through to muddling up - Evidence based policy making and the modernisation of British Government" Public Policy and Administration, 17(3):43-60.

Pawson, R. (2002a) "Evidence based policy: in search of a method" Evaluation, 8(2):157-181.

Pawson, R. (2002b) "Evidence based policy: the promise of 'realist synthesis'" Evaluation 8(3):340-358.

Peckham, M. (1991) "Research and development for the National Health Service" Lancet 338:163-70.

Performance and Innovation Unit (2000) Adding It Up: Improving Analysis and Modelling in Central Government, Cabinet Office, London (accessed 7 June 2002).

Performance and Innovation Unit (2001) Better Policy Delivery and Design: A Discussion Paper, Cabinet Office, London (accessed 7 June 2002).

Perri 6 (2002) "Can policy making be evidence-based?" MCC: Building Knowledge for Integrated Care, 10(1):3-8.

Petticrew, M. (2001) "Systematic reviews from astronomy to zoology: myths and misconceptions" BMJ, 322(7278):98-107.

Rogers, E.M. (1995) Diffusion of Innovations, Free Press, New York.

Sabatier, P.A. and H.C. Jenkins-Smith (eds.) (1993) Policy Change and Learning: An Advocacy Coalition Approach, Westview Press, Boulder, Colorado.

Sanderson, I. (2002) "Making sense of 'what works': evidence based policy making as instrumental rationality" Public Policy and Administration, 17(3):61-75.

Schulz, K.F., I. Chalmers et al. (1995) "Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials" Journal of the American Medical Association, 273:408-12.

Solesbury, W. (1999) "Research and Policy", seminar presentation, available from the author at ESRC UK Centre for Evidence-Based Policy and Practice, Queen Mary College, University of London.

Strategic Policy Making Team (1999) Professional Policy Making for the Twenty First Century, Cabinet Office, London (accessed 7 June 2002).

Walshe, K. and T.G. Rundall (2001) "Evidence-based management: from theory to practice in health care" The Milbank Quarterly, 79(3):429-457.

Walter, I., S.M. Nutley and H.T.O. Davies (2003) Research Impact: A Cross Sector Review, Research Unit for Research Utilisation, University of St Andrews (accessed 1 May 2003).

Weiss, C.H. (1979) "The many meanings of research utilisation" Public Administration Review, 39(5):426-31.

Weiss, C.H. (1980) "Knowledge creep and decision accretion" Knowledge: Creation, Diffusion and Utilisation, 1(3):381-404.

Weiss, C.H. (1987) "The circuitry of enlightenment" Knowledge: Creation, Diffusion, Utilisation, 1(3):381-404.

Weiss, C.H. (1995) "Nothing as practical as a good theory: exploring theory-based evaluation for comprehensive community initiatives for children and families" in J.P. Connell, A.C. Kubisch, L.B. Schorr and C.H. Weiss (eds.) New Approaches to Evaluating Community Initiatives, Volume 1: Concepts, Methods and Contexts, Aspen Institute, Washington, D.C.

Weiss, C.H. (1998) "Have we learned anything new about the use of evaluation?" American Journal of Evaluation, 19(1):21-33.
